Test Report: Docker_Linux_crio 17731

2299ceaec17b686deec86f12c40bdefcf1fe6842:2023-12-05:32161

Test failures (7/315)

|-------|----------------------------------------------------------|--------------|
| Order | Failed test                                              | Duration (s) |
|-------|----------------------------------------------------------|--------------|
| 35    | TestAddons/parallel/Ingress                              | 152.25       |
| 36    | TestAddons/parallel/InspektorGadget                      | 482.54       |
| 130   | TestFunctional/parallel/ImageCommands/ImageReloadDaemon  | 7.12         |
| 166   | TestIngressAddonLegacy/serial/ValidateIngressAddons      | 182.51       |
| 216   | TestMultiNode/serial/PingHostFrom2Pods                   | 3.23         |
| 238   | TestRunningBinaryUpgrade                                 | 99.3         |
| 271   | TestStoppedBinaryUpgrade/Upgrade                         | 69.58        |
|-------|----------------------------------------------------------|--------------|
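
To replay the headline failure by hand, the same commands the test ran can be repeated against a fresh profile. This is a minimal sketch assembled from the log below: the start flags are a relevant subset of those in the Audit table, and a built out/minikube-linux-amd64 plus the repo's testdata manifests are assumed.

	# Start a profile with the same driver, runtime, and ingress addons this job used.
	out/minikube-linux-amd64 start -p addons-030936 --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns
	# Deploy the test ingress and backend pod, then wait for the pod, as the test does.
	kubectl --context addons-030936 replace --force -f testdata/nginx-ingress-v1.yaml
	kubectl --context addons-030936 replace --force -f testdata/nginx-pod-svc.yaml
	kubectl --context addons-030936 wait --for=condition=ready --timeout=8m0s pod --selector=run=nginx
	# The step that failed in this run; exit status 28 from curl indicates the request timed out.
	out/minikube-linux-amd64 -p addons-030936 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
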
TestAddons/parallel/Ingress (152.25s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-030936 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-030936 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-030936 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [842b11fd-496b-475f-a7ac-194551be87be] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [842b11fd-496b-475f-a7ac-194551be87be] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.008182688s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-030936 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-030936 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.150371514s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-030936 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-030936 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-030936 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-030936 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-030936 addons disable ingress --alsologtostderr -v=1: (7.665651181s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-030936
helpers_test.go:235: (dbg) docker inspect addons-030936:

-- stdout --
	[
	    {
	        "Id": "d8edc52e4b3cb1eb27e7f0018b587530288794253ec202481f3659057a786e0b",
	        "Created": "2023-12-05T19:35:38.531584162Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14625,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-05T19:35:38.85870844Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:87b04fa850a730e5ca832acdf82e6994855a857f2c65a1e9dfd36c86f13b161b",
	        "ResolvConfPath": "/var/lib/docker/containers/d8edc52e4b3cb1eb27e7f0018b587530288794253ec202481f3659057a786e0b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d8edc52e4b3cb1eb27e7f0018b587530288794253ec202481f3659057a786e0b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d8edc52e4b3cb1eb27e7f0018b587530288794253ec202481f3659057a786e0b/hosts",
	        "LogPath": "/var/lib/docker/containers/d8edc52e4b3cb1eb27e7f0018b587530288794253ec202481f3659057a786e0b/d8edc52e4b3cb1eb27e7f0018b587530288794253ec202481f3659057a786e0b-json.log",
	        "Name": "/addons-030936",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-030936:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-030936",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fc821b8f9545652967106a5c6f8259265d887bbbe0eb8fe1a2db4ed4b778b4cf-init/diff:/var/lib/docker/overlay2/8cb0dc756d42dafb4250d739248baa62eaad1aada62df117f76ff2e087cad9b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc821b8f9545652967106a5c6f8259265d887bbbe0eb8fe1a2db4ed4b778b4cf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc821b8f9545652967106a5c6f8259265d887bbbe0eb8fe1a2db4ed4b778b4cf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc821b8f9545652967106a5c6f8259265d887bbbe0eb8fe1a2db4ed4b778b4cf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-030936",
	                "Source": "/var/lib/docker/volumes/addons-030936/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-030936",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-030936",
	                "name.minikube.sigs.k8s.io": "addons-030936",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fcf3dd2c351f47a3d797a0c5c53111895392c7483ad65d6cb7e5a691dde8a064",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fcf3dd2c351f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-030936": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d8edc52e4b3c",
	                        "addons-030936"
	                    ],
	                    "NetworkID": "93543a5ccc9738ba72bc1f7a0af74a705b7fbe3a0583a577dc8b0d1ca5a409a8",
	                    "EndpointID": "484985b822a232480e61c2bc20afa4c3d3d8a6040b0eb69aad112b6ff36d5767",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-030936 -n addons-030936
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-030936 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-030936 logs -n 25: (1.196151085s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| delete  | -p download-only-428164                                                                     | download-only-428164   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| delete  | -p download-only-428164                                                                     | download-only-428164   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| start   | --download-only -p                                                                          | download-docker-383682 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | download-docker-383682                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-383682                                                                   | download-docker-383682 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-319231   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | binary-mirror-319231                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:32971                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-319231                                                                     | binary-mirror-319231   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| addons  | enable dashboard -p                                                                         | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | addons-030936                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | addons-030936                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-030936 --wait=true                                                                | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-030936 addons                                                                        | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-030936 ssh cat                                                                       | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | /opt/local-path-provisioner/pvc-c0670ccc-a245-46b9-8552-084bf6aa50cf_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-030936 addons disable                                                                | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:38 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | addons-030936                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | -p addons-030936                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-030936 ip                                                                            | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	| addons  | addons-030936 addons disable                                                                | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-030936 ssh curl -s                                                                   | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | -p addons-030936                                                                            |                        |         |         |                     |                     |
	| addons  | addons-030936 addons disable                                                                | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-030936 addons                                                                        | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-030936 addons                                                                        | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-030936 ip                                                                            | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:40 UTC | 05 Dec 23 19:40 UTC |
	| addons  | addons-030936 addons disable                                                                | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:40 UTC | 05 Dec 23 19:40 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-030936 addons disable                                                                | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:40 UTC | 05 Dec 23 19:40 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:35:15
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:35:15.725024   13952 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:35:15.725154   13952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:15.725162   13952 out.go:309] Setting ErrFile to fd 2...
	I1205 19:35:15.725166   13952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:15.725350   13952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
	I1205 19:35:15.725960   13952 out.go:303] Setting JSON to false
	I1205 19:35:15.726755   13952 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1068,"bootTime":1701803848,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:35:15.726815   13952 start.go:138] virtualization: kvm guest
	I1205 19:35:15.729414   13952 out.go:177] * [addons-030936] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:35:15.731051   13952 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 19:35:15.732594   13952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:35:15.731058   13952 notify.go:220] Checking for updates...
	I1205 19:35:15.734341   13952 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 19:35:15.735934   13952 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	I1205 19:35:15.737432   13952 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:35:15.739036   13952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:35:15.740630   13952 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:35:15.760271   13952 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:35:15.760388   13952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:35:15.812013   13952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:41 SystemTime:2023-12-05 19:35:15.803129407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:35:15.812112   13952 docker.go:295] overlay module found
	I1205 19:35:15.814181   13952 out.go:177] * Using the docker driver based on user configuration
	I1205 19:35:15.815818   13952 start.go:298] selected driver: docker
	I1205 19:35:15.815830   13952 start.go:902] validating driver "docker" against <nil>
	I1205 19:35:15.815840   13952 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:35:15.816633   13952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:35:15.865741   13952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:41 SystemTime:2023-12-05 19:35:15.8579075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:35:15.865889   13952 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 19:35:15.866107   13952 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:35:15.868310   13952 out.go:177] * Using Docker driver with root privileges
	I1205 19:35:15.870127   13952 cni.go:84] Creating CNI manager for ""
	I1205 19:35:15.870149   13952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:35:15.870159   13952 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:35:15.870169   13952 start_flags.go:323] config:
	{Name:addons-030936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-030936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:15.871951   13952 out.go:177] * Starting control plane node addons-030936 in cluster addons-030936
	I1205 19:35:15.873285   13952 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:35:15.874705   13952 out.go:177] * Pulling base image ...
	I1205 19:35:15.875952   13952 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:35:15.875990   13952 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 19:35:15.876003   13952 cache.go:56] Caching tarball of preloaded images
	I1205 19:35:15.876057   13952 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 19:35:15.876118   13952 preload.go:174] Found /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:35:15.876132   13952 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 19:35:15.876523   13952 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/config.json ...
	I1205 19:35:15.876551   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/config.json: {Name:mk6feeae17388382e4bfff44f115f9965b601900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:15.891146   13952 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1205 19:35:15.891258   13952 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory
	I1205 19:35:15.891274   13952 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory, skipping pull
	I1205 19:35:15.891278   13952 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in cache, skipping pull
	I1205 19:35:15.891293   13952 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f as a tarball
	I1205 19:35:15.891300   13952 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f from local cache
	I1205 19:35:27.235843   13952 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f from cached tarball
	I1205 19:35:27.235886   13952 cache.go:194] Successfully downloaded all kic artifacts
	I1205 19:35:27.235915   13952 start.go:365] acquiring machines lock for addons-030936: {Name:mk83ff218c25043d0e306eee7870b5366e64c5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:35:27.236015   13952 start.go:369] acquired machines lock for "addons-030936" in 81.288µs
	I1205 19:35:27.236039   13952 start.go:93] Provisioning new machine with config: &{Name:addons-030936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-030936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:35:27.236116   13952 start.go:125] createHost starting for "" (driver="docker")
	I1205 19:35:27.238279   13952 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1205 19:35:27.238501   13952 start.go:159] libmachine.API.Create for "addons-030936" (driver="docker")
	I1205 19:35:27.238527   13952 client.go:168] LocalClient.Create starting
	I1205 19:35:27.238613   13952 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem
	I1205 19:35:27.336238   13952 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem
	I1205 19:35:27.519096   13952 cli_runner.go:164] Run: docker network inspect addons-030936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 19:35:27.534392   13952 cli_runner.go:211] docker network inspect addons-030936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 19:35:27.534467   13952 network_create.go:281] running [docker network inspect addons-030936] to gather additional debugging logs...
	I1205 19:35:27.534490   13952 cli_runner.go:164] Run: docker network inspect addons-030936
	W1205 19:35:27.548744   13952 cli_runner.go:211] docker network inspect addons-030936 returned with exit code 1
	I1205 19:35:27.548769   13952 network_create.go:284] error running [docker network inspect addons-030936]: docker network inspect addons-030936: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-030936 not found
	I1205 19:35:27.548780   13952 network_create.go:286] output of [docker network inspect addons-030936]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-030936 not found
	
	** /stderr **
	I1205 19:35:27.548874   13952 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:35:27.564142   13952 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002cdb4d0}
	I1205 19:35:27.564181   13952 network_create.go:124] attempt to create docker network addons-030936 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 19:35:27.564248   13952 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-030936 addons-030936
	I1205 19:35:27.876420   13952 network_create.go:108] docker network addons-030936 192.168.49.0/24 created
	I1205 19:35:27.876449   13952 kic.go:121] calculated static IP "192.168.49.2" for the "addons-030936" container
	I1205 19:35:27.876497   13952 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 19:35:27.890973   13952 cli_runner.go:164] Run: docker volume create addons-030936 --label name.minikube.sigs.k8s.io=addons-030936 --label created_by.minikube.sigs.k8s.io=true
	I1205 19:35:27.993659   13952 oci.go:103] Successfully created a docker volume addons-030936
	I1205 19:35:27.993748   13952 cli_runner.go:164] Run: docker run --rm --name addons-030936-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-030936 --entrypoint /usr/bin/test -v addons-030936:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib
	I1205 19:35:33.284270   13952 cli_runner.go:217] Completed: docker run --rm --name addons-030936-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-030936 --entrypoint /usr/bin/test -v addons-030936:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib: (5.29047996s)
	I1205 19:35:33.284296   13952 oci.go:107] Successfully prepared a docker volume addons-030936
	I1205 19:35:33.284314   13952 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:35:33.284331   13952 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 19:35:33.284374   13952 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-030936:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 19:35:38.464120   13952 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-030936:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir: (5.179701172s)
	I1205 19:35:38.464150   13952 kic.go:203] duration metric: took 5.179814 seconds to extract preloaded images to volume
	W1205 19:35:38.464296   13952 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 19:35:38.464381   13952 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 19:35:38.516514   13952 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-030936 --name addons-030936 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-030936 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-030936 --network addons-030936 --ip 192.168.49.2 --volume addons-030936:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1205 19:35:38.867220   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Running}}
	I1205 19:35:38.885381   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:35:38.903961   13952 cli_runner.go:164] Run: docker exec addons-030936 stat /var/lib/dpkg/alternatives/iptables
	I1205 19:35:38.944508   13952 oci.go:144] the created container "addons-030936" has a running status.
	I1205 19:35:38.944560   13952 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa...
	I1205 19:35:39.060109   13952 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 19:35:39.080463   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:35:39.097110   13952 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 19:35:39.097134   13952 kic_runner.go:114] Args: [docker exec --privileged addons-030936 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 19:35:39.160595   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:35:39.176546   13952 machine.go:88] provisioning docker machine ...
	I1205 19:35:39.176594   13952 ubuntu.go:169] provisioning hostname "addons-030936"
	I1205 19:35:39.176650   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:35:39.195057   13952 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:39.195595   13952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1205 19:35:39.195619   13952 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-030936 && echo "addons-030936" | sudo tee /etc/hostname
	I1205 19:35:39.197298   13952 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59078->127.0.0.1:32772: read: connection reset by peer
	I1205 19:35:42.338199   13952 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-030936
	
	I1205 19:35:42.338270   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:35:42.354216   13952 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:42.354533   13952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1205 19:35:42.354551   13952 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-030936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-030936/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-030936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:35:42.484433   13952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
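The two SSH commands above first set the kernel hostname and then reconcile /etc/hosts so that 127.0.1.1 maps to it. A minimal standalone sketch of the same idempotent edit, with the hostname hardcoded from the log (run only on a disposable machine):

	#!/usr/bin/env bash
	# Sketch of the hostname + /etc/hosts reconciliation shown above.
	NAME=addons-030936   # value taken from the log
	sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	if ! grep -q "\s$NAME$" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
	    sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" /etc/hosts
	  else
	    echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	  fi
	fi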
	I1205 19:35:42.484482   13952 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6088/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6088/.minikube}
	I1205 19:35:42.484525   13952 ubuntu.go:177] setting up certificates
	I1205 19:35:42.484537   13952 provision.go:83] configureAuth start
	I1205 19:35:42.484611   13952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-030936
	I1205 19:35:42.500909   13952 provision.go:138] copyHostCerts
	I1205 19:35:42.500979   13952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem (1078 bytes)
	I1205 19:35:42.501099   13952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem (1123 bytes)
	I1205 19:35:42.501180   13952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem (1679 bytes)
	I1205 19:35:42.501259   13952 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem org=jenkins.addons-030936 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-030936]
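provision.go generates the server certificate with Go's crypto libraries, but the effect is roughly the openssl sketch below. The org and SAN list are copied from the log line above; the openssl invocations are an approximation, not minikube's actual code path:

	# Issue a server cert signed by the existing minikube CA (ca.pem/ca-key.pem).
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr -subj "/O=jenkins.addons-030936"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	  -CAcreateserial -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:addons-030936')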
	I1205 19:35:42.630583   13952 provision.go:172] copyRemoteCerts
	I1205 19:35:42.630642   13952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:35:42.630672   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:35:42.647107   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:35:42.740371   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:35:42.761194   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:35:42.782525   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1205 19:35:42.803329   13952 provision.go:86] duration metric: configureAuth took 318.774147ms
	I1205 19:35:42.803359   13952 ubuntu.go:193] setting minikube options for container-runtime
	I1205 19:35:42.803541   13952 config.go:182] Loaded profile config "addons-030936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:35:42.803646   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:35:42.819895   13952 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:42.820347   13952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1205 19:35:42.820371   13952 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:35:43.037157   13952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:35:43.037179   13952 machine.go:91] provisioned docker machine in 3.860610629s
	I1205 19:35:43.037189   13952 client.go:171] LocalClient.Create took 15.7986566s
	I1205 19:35:43.037212   13952 start.go:167] duration metric: libmachine.API.Create for "addons-030936" took 15.798710641s
	I1205 19:35:43.037221   13952 start.go:300] post-start starting for "addons-030936" (driver="docker")
	I1205 19:35:43.037245   13952 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:35:43.037303   13952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:35:43.037351   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:35:43.055135   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:35:43.148458   13952 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:35:43.151324   13952 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 19:35:43.151353   13952 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 19:35:43.151363   13952 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 19:35:43.151371   13952 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1205 19:35:43.151391   13952 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6088/.minikube/addons for local assets ...
	I1205 19:35:43.151452   13952 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6088/.minikube/files for local assets ...
	I1205 19:35:43.151479   13952 start.go:303] post-start completed in 114.252015ms
	I1205 19:35:43.151765   13952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-030936
	I1205 19:35:43.168111   13952 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/config.json ...
	I1205 19:35:43.168417   13952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:35:43.168458   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:35:43.185306   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:35:43.276778   13952 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 19:35:43.280704   13952 start.go:128] duration metric: createHost completed in 16.044575801s
	I1205 19:35:43.280726   13952 start.go:83] releasing machines lock for "addons-030936", held for 16.044699676s
	I1205 19:35:43.280780   13952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-030936
	I1205 19:35:43.297038   13952 ssh_runner.go:195] Run: cat /version.json
	I1205 19:35:43.297084   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:35:43.297110   13952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:35:43.297221   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:35:43.315551   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:35:43.315791   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:35:43.491466   13952 ssh_runner.go:195] Run: systemctl --version
	I1205 19:35:43.495353   13952 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:35:43.629288   13952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 19:35:43.633336   13952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:35:43.650614   13952 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 19:35:43.650694   13952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:35:43.677728   13952 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
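The two find ... -exec mv commands above park the stock loopback and bridge/podman CNI configs under a .mk_disabled suffix so that the kindnet config installed later is the only one CRI-O picks up. A reverse sketch (a hypothetical helper, not something minikube ships):

	# Restore any CNI configs that were renamed with the .mk_disabled suffix.
	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;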
	I1205 19:35:43.677759   13952 start.go:475] detecting cgroup driver to use...
	I1205 19:35:43.677796   13952 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1205 19:35:43.677846   13952 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:35:43.691277   13952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:35:43.701250   13952 docker.go:203] disabling cri-docker service (if available) ...
	I1205 19:35:43.701307   13952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:35:43.716033   13952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:35:43.728627   13952 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:35:43.807803   13952 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:35:43.887861   13952 docker.go:219] disabling docker service ...
	I1205 19:35:43.887927   13952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:35:43.904376   13952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:35:43.914244   13952 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:35:43.987597   13952 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:35:44.068092   13952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:35:44.077785   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:35:44.092210   13952 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 19:35:44.092267   13952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:35:44.100653   13952 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:35:44.100725   13952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:35:44.109056   13952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:35:44.117643   13952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:35:44.125950   13952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:35:44.133747   13952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:35:44.140860   13952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:35:44.147907   13952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:35:44.222584   13952 ssh_runner.go:195] Run: sudo systemctl restart crio
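The sed edits above pin the pause image and switch CRI-O to the cgroupfs cgroup manager before the restart. Assuming 02-crio.conf started with the stock keys, the edited file should read back as:

	grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	# expected (per the log messages above):
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"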
	I1205 19:35:44.327204   13952 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:35:44.327268   13952 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:35:44.330450   13952 start.go:543] Will wait 60s for crictl version
	I1205 19:35:44.330500   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:35:44.333477   13952 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:35:44.365760   13952 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1205 19:35:44.365864   13952 ssh_runner.go:195] Run: crio --version
	I1205 19:35:44.400670   13952 ssh_runner.go:195] Run: crio --version
	I1205 19:35:44.434144   13952 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1205 19:35:44.435563   13952 cli_runner.go:164] Run: docker network inspect addons-030936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:35:44.451318   13952 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 19:35:44.454583   13952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:35:44.463987   13952 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:35:44.464034   13952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:35:44.516312   13952 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 19:35:44.516333   13952 crio.go:415] Images already preloaded, skipping extraction
	I1205 19:35:44.516379   13952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:35:44.547427   13952 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 19:35:44.547449   13952 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:35:44.547505   13952 ssh_runner.go:195] Run: crio config
	I1205 19:35:44.585318   13952 cni.go:84] Creating CNI manager for ""
	I1205 19:35:44.585337   13952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:35:44.585358   13952 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 19:35:44.585383   13952 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-030936 NodeName:addons-030936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:35:44.585536   13952 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-030936"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
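The rendered kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml a few lines further down. Before an init it can be sanity-checked without mutating the node, since --dry-run is a standard kubeadm flag:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run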
	
	I1205 19:35:44.585613   13952 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-030936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-030936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
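The [Service] override above lands as the systemd drop-in 10-kubeadm.conf (see the scp lines that follow). A hedged sketch of installing such a drop-in by hand, with the ExecStart flags trimmed to two essentials for brevity:

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	# Full flag set trimmed; see the log line above for the complete command.
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock
	EOF
	sudo systemctl daemon-reload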
	I1205 19:35:44.585668   13952 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 19:35:44.593502   13952 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:35:44.593552   13952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 19:35:44.600602   13952 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1205 19:35:44.614890   13952 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:35:44.629508   13952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1205 19:35:44.645064   13952 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 19:35:44.647895   13952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:35:44.656744   13952 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936 for IP: 192.168.49.2
	I1205 19:35:44.656768   13952 certs.go:190] acquiring lock for shared ca certs: {Name:mk6fbd7b27250f9a01d87d327232e4afd0539a2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:44.656863   13952 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key
	I1205 19:35:44.943594   13952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt ...
	I1205 19:35:44.943628   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt: {Name:mkd05ad24bcb37acd20b4a8a593813ca81d33c4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:44.943828   13952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key ...
	I1205 19:35:44.943843   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key: {Name:mk20b22277ba592e40f1366a895a8d85d6727858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:44.943935   13952 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key
	I1205 19:35:45.027333   13952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.crt ...
	I1205 19:35:45.027363   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.crt: {Name:mkcb4a75cc08c5d51336d952f946273cb8bfb8d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:45.027557   13952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key ...
	I1205 19:35:45.027571   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key: {Name:mk7ab9bf29928ec0820d5b387e58e4d640f50ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:45.027698   13952 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.key
	I1205 19:35:45.027713   13952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt with IP's: []
	I1205 19:35:45.136599   13952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt ...
	I1205 19:35:45.136627   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: {Name:mk549b45e94a3213800e3bf739fc30aaf41137ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:45.136808   13952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.key ...
	I1205 19:35:45.136822   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.key: {Name:mk6ce73ee3cee48aaea77933cce9dbc2070f1feb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:45.136910   13952 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.key.dd3b5fb2
	I1205 19:35:45.136929   13952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1205 19:35:45.416684   13952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.crt.dd3b5fb2 ...
	I1205 19:35:45.416716   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.crt.dd3b5fb2: {Name:mka5423a609e5d353fcf2781bc07f34009b7ddf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:45.416906   13952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.key.dd3b5fb2 ...
	I1205 19:35:45.416923   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.key.dd3b5fb2: {Name:mk0d07d5437f4e7279b33579f3008cf206aa6385 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:45.417020   13952 certs.go:337] copying /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.crt
	I1205 19:35:45.417092   13952 certs.go:341] copying /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.key
	I1205 19:35:45.417135   13952 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/proxy-client.key
	I1205 19:35:45.417150   13952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/proxy-client.crt with IP's: []
	I1205 19:35:45.546894   13952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/proxy-client.crt ...
	I1205 19:35:45.546923   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/proxy-client.crt: {Name:mk9124eef00a7036c57f9f2e6af0f9d7a6374656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:45.547108   13952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/proxy-client.key ...
	I1205 19:35:45.547129   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/proxy-client.key: {Name:mkeaf51739a91a52ff0f836d5af9486da7395742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:45.547328   13952 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 19:35:45.547362   13952 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:35:45.547388   13952 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:35:45.547420   13952 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem (1679 bytes)
	I1205 19:35:45.548005   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 19:35:45.568705   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:35:45.588772   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:35:45.608805   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:35:45.628990   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:35:45.649132   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:35:45.669319   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:35:45.689205   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 19:35:45.710156   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:35:45.730948   13952 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:35:45.746346   13952 ssh_runner.go:195] Run: openssl version
	I1205 19:35:45.750992   13952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:35:45.758948   13952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:35:45.761874   13952 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:35:45.761910   13952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:35:45.767835   13952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
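The b5213941.0 link name created above is OpenSSL's subject hash of minikubeCA.pem, which is exactly what the openssl x509 -hash call two steps earlier computes:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints the hash used as the /etc/ssl/certs/<hash>.0 link name
	# (b5213941 in this run)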
	I1205 19:35:45.776236   13952 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 19:35:45.778997   13952 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 19:35:45.779039   13952 kubeadm.go:404] StartCluster: {Name:addons-030936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-030936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:45.779098   13952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:35:45.779137   13952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:35:45.809722   13952 cri.go:89] found id: ""
	I1205 19:35:45.809778   13952 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:35:45.817351   13952 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:35:45.825483   13952 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1205 19:35:45.825523   13952 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:35:45.832919   13952 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:35:45.832956   13952 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 19:35:45.907422   13952 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1205 19:35:45.966264   13952 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 19:35:54.544970   13952 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1205 19:35:54.545039   13952 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 19:35:54.545156   13952 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1205 19:35:54.545249   13952 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1205 19:35:54.545297   13952 kubeadm.go:322] OS: Linux
	I1205 19:35:54.545363   13952 kubeadm.go:322] CGROUPS_CPU: enabled
	I1205 19:35:54.545479   13952 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1205 19:35:54.545549   13952 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1205 19:35:54.545630   13952 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1205 19:35:54.545717   13952 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1205 19:35:54.545779   13952 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1205 19:35:54.545846   13952 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1205 19:35:54.545894   13952 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1205 19:35:54.545960   13952 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1205 19:35:54.546039   13952 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:35:54.546169   13952 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:35:54.546301   13952 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 19:35:54.546392   13952 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:35:54.548066   13952 out.go:204]   - Generating certificates and keys ...
	I1205 19:35:54.548148   13952 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 19:35:54.548259   13952 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 19:35:54.548358   13952 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:35:54.548432   13952 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:35:54.548508   13952 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:35:54.548570   13952 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1205 19:35:54.548660   13952 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1205 19:35:54.548825   13952 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-030936 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 19:35:54.548901   13952 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1205 19:35:54.549046   13952 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-030936 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 19:35:54.549138   13952 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:35:54.549228   13952 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:35:54.549288   13952 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1205 19:35:54.549365   13952 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:35:54.549422   13952 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:35:54.549469   13952 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:35:54.549530   13952 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:35:54.549575   13952 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:35:54.549648   13952 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:35:54.549701   13952 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:35:54.551362   13952 out.go:204]   - Booting up control plane ...
	I1205 19:35:54.551432   13952 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:35:54.551527   13952 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:35:54.551600   13952 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:35:54.551713   13952 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:35:54.551788   13952 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:35:54.551821   13952 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 19:35:54.551954   13952 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 19:35:54.552020   13952 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002284 seconds
	I1205 19:35:54.552140   13952 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:35:54.552278   13952 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:35:54.552334   13952 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:35:54.552491   13952 kubeadm.go:322] [mark-control-plane] Marking the node addons-030936 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 19:35:54.552542   13952 kubeadm.go:322] [bootstrap-token] Using token: wzh2ds.ktaesz4l7xwfj2en
	I1205 19:35:54.553848   13952 out.go:204]   - Configuring RBAC rules ...
	I1205 19:35:54.553960   13952 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:35:54.554057   13952 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:35:54.554223   13952 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:35:54.554379   13952 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:35:54.554542   13952 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:35:54.554680   13952 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:35:54.554839   13952 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:35:54.554879   13952 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 19:35:54.554923   13952 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 19:35:54.554930   13952 kubeadm.go:322] 
	I1205 19:35:54.554993   13952 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 19:35:54.555003   13952 kubeadm.go:322] 
	I1205 19:35:54.555076   13952 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 19:35:54.555085   13952 kubeadm.go:322] 
	I1205 19:35:54.555116   13952 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 19:35:54.555175   13952 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:35:54.555218   13952 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:35:54.555224   13952 kubeadm.go:322] 
	I1205 19:35:54.555272   13952 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 19:35:54.555281   13952 kubeadm.go:322] 
	I1205 19:35:54.555328   13952 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 19:35:54.555334   13952 kubeadm.go:322] 
	I1205 19:35:54.555375   13952 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 19:35:54.555438   13952 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:35:54.555502   13952 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:35:54.555508   13952 kubeadm.go:322] 
	I1205 19:35:54.555597   13952 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:35:54.555695   13952 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 19:35:54.555706   13952 kubeadm.go:322] 
	I1205 19:35:54.555813   13952 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token wzh2ds.ktaesz4l7xwfj2en \
	I1205 19:35:54.555956   13952 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f61b399cb6776d724c7cf1a9a4fb9913cb1ff908aabc5bdeeadc4488475094de \
	I1205 19:35:54.555989   13952 kubeadm.go:322] 	--control-plane 
	I1205 19:35:54.555998   13952 kubeadm.go:322] 
	I1205 19:35:54.556104   13952 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:35:54.556113   13952 kubeadm.go:322] 
	I1205 19:35:54.556241   13952 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wzh2ds.ktaesz4l7xwfj2en \
	I1205 19:35:54.556385   13952 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f61b399cb6776d724c7cf1a9a4fb9913cb1ff908aabc5bdeeadc4488475094de 
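The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. The standard kubeadm recipe recomputes it as follows; the CA path is minikube's, per the cert copies earlier in this log:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'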
	I1205 19:35:54.556398   13952 cni.go:84] Creating CNI manager for ""
	I1205 19:35:54.556410   13952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:35:54.557819   13952 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 19:35:54.558986   13952 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 19:35:54.562381   13952 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1205 19:35:54.562395   13952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1205 19:35:54.577410   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 19:35:55.178013   13952 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:35:55.178136   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:55.178153   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=addons-030936 minikube.k8s.io/updated_at=2023_12_05T19_35_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:55.185098   13952 ops.go:34] apiserver oom_adj: -16
	I1205 19:35:55.255720   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:55.316886   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:55.879855   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:56.379991   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:56.879407   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:57.379915   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:57.879455   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:58.379497   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:58.879642   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:59.379000   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:59.879083   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:00.379959   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:00.879906   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:01.378981   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:01.879663   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:02.378980   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:02.879650   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:03.379783   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:03.879884   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:04.379478   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:04.879908   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:05.379727   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:05.879032   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:06.379866   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:06.879589   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:07.379775   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:07.450764   13952 kubeadm.go:1088] duration metric: took 12.272685303s to wait for elevateKubeSystemPrivileges.
	I1205 19:36:07.450794   13952 kubeadm.go:406] StartCluster complete in 21.671758877s
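The burst of "get sa default" calls above is a half-second poll waiting for the default ServiceAccount to exist (the elevateKubeSystemPrivileges wait). The same loop, written out in plain shell using the command from the log:

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done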
	I1205 19:36:07.450816   13952 settings.go:142] acquiring lock: {Name:mkfaf26f24f59aefb8a41893ed2faf05d01ae7b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:07.450931   13952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 19:36:07.451355   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/kubeconfig: {Name:mk1f41ec1ae8a6de6a6de4f641695e135340252f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:07.451533   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:36:07.451613   13952 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1205 19:36:07.451716   13952 addons.go:69] Setting volumesnapshots=true in profile "addons-030936"
	I1205 19:36:07.451725   13952 addons.go:69] Setting helm-tiller=true in profile "addons-030936"
	I1205 19:36:07.451740   13952 addons.go:69] Setting metrics-server=true in profile "addons-030936"
	I1205 19:36:07.451744   13952 config.go:182] Loaded profile config "addons-030936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:36:07.451747   13952 addons.go:231] Setting addon volumesnapshots=true in "addons-030936"
	I1205 19:36:07.451754   13952 addons.go:231] Setting addon helm-tiller=true in "addons-030936"
	I1205 19:36:07.451769   13952 addons.go:69] Setting inspektor-gadget=true in profile "addons-030936"
	I1205 19:36:07.451778   13952 addons.go:69] Setting ingress=true in profile "addons-030936"
	I1205 19:36:07.451781   13952 addons.go:69] Setting default-storageclass=true in profile "addons-030936"
	I1205 19:36:07.451779   13952 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-030936"
	I1205 19:36:07.451794   13952 addons.go:231] Setting addon ingress=true in "addons-030936"
	I1205 19:36:07.451796   13952 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-030936"
	I1205 19:36:07.451801   13952 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-030936"
	I1205 19:36:07.451808   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.451792   13952 addons.go:69] Setting cloud-spanner=true in profile "addons-030936"
	I1205 19:36:07.451819   13952 addons.go:69] Setting storage-provisioner=true in profile "addons-030936"
	I1205 19:36:07.451819   13952 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-030936"
	I1205 19:36:07.451829   13952 addons.go:231] Setting addon storage-provisioner=true in "addons-030936"
	I1205 19:36:07.451836   13952 addons.go:231] Setting addon cloud-spanner=true in "addons-030936"
	I1205 19:36:07.451838   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.451847   13952 addons.go:69] Setting gcp-auth=true in profile "addons-030936"
	I1205 19:36:07.451848   13952 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-030936"
	I1205 19:36:07.451865   13952 mustload.go:65] Loading cluster: addons-030936
	I1205 19:36:07.451882   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.451898   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.451934   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.451836   13952 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-030936"
	I1205 19:36:07.452004   13952 config.go:182] Loaded profile config "addons-030936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:36:07.452165   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.452187   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.452244   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.452323   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.452350   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.452362   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.451785   13952 addons.go:231] Setting addon inspektor-gadget=true in "addons-030936"
	I1205 19:36:07.452692   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.452729   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.451758   13952 addons.go:69] Setting ingress-dns=true in profile "addons-030936"
	I1205 19:36:07.452855   13952 addons.go:231] Setting addon ingress-dns=true in "addons-030936"
	I1205 19:36:07.452897   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.452362   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.453171   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.453338   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.451809   13952 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-030936"
	I1205 19:36:07.455268   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.455713   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.451758   13952 addons.go:231] Setting addon metrics-server=true in "addons-030936"
	I1205 19:36:07.457461   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.457898   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.451808   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.451791   13952 addons.go:69] Setting registry=true in profile "addons-030936"
	I1205 19:36:07.458882   13952 addons.go:231] Setting addon registry=true in "addons-030936"
	I1205 19:36:07.458957   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.459285   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.464461   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
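The burst of cli_runner calls above is minikube polling the kic node container through Docker's Go-template formatter. Both probes from this run can be reproduced by hand (container name taken from the log):

	# lifecycle state of the node container
	docker container inspect addons-030936 --format '{{.State.Status}}'
	# host port published for the container's SSH port 22, used further down to build ssh clients
	docker container inspect addons-030936 -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'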
	I1205 19:36:07.495374   13952 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-030936" context rescaled to 1 replicas
	I1205 19:36:07.495423   13952 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
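The six-minute node gate set by start.go drives the node_ready polling that recurs through the rest of this log. Expressed directly against the same cluster, roughly:

	kubectl --context addons-030936 wait --for=condition=Ready node/addons-030936 --timeout=6m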
	I1205 19:36:07.498546   13952 out.go:177] * Verifying Kubernetes components...
	I1205 19:36:07.497621   13952 addons.go:231] Setting addon default-storageclass=true in "addons-030936"
	I1205 19:36:07.502587   13952 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1205 19:36:07.500898   13952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:36:07.500898   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.506818   13952 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:36:07.504870   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.510938   13952 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1205 19:36:07.512212   13952 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1205 19:36:07.513499   13952 out.go:177]   - Using image docker.io/registry:2.8.3
	I1205 19:36:07.508391   13952 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:36:07.508396   13952 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1205 19:36:07.510877   13952 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-030936"
	I1205 19:36:07.508377   13952 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:36:07.512233   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1205 19:36:07.514956   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:36:07.514983   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.516122   13952 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1205 19:36:07.516180   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.520153   13952 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1205 19:36:07.522083   13952 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1205 19:36:07.523626   13952 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:36:07.520276   13952 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1205 19:36:07.523660   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1205 19:36:07.523711   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.520490   13952 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1205 19:36:07.520771   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.520252   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.521941   13952 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 19:36:07.522103   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 19:36:07.523773   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1205 19:36:07.523834   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.526967   13952 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1205 19:36:07.526980   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 19:36:07.526988   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 19:36:07.527253   13952 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:36:07.527306   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.531455   13952 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 19:36:07.531482   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 19:36:07.531541   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.533323   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.529453   13952 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:36:07.529510   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1205 19:36:07.540340   13952 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1205 19:36:07.540365   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 19:36:07.542076   13952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 19:36:07.542094   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 19:36:07.542140   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.542149   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.549249   13952 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:36:07.549276   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1205 19:36:07.549336   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.552318   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 19:36:07.542484   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.555426   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 19:36:07.556785   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 19:36:07.557754   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.560854   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 19:36:07.563371   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1205 19:36:07.565127   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 19:36:07.566581   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 19:36:07.568048   13952 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 19:36:07.568071   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 19:36:07.568125   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.569496   13952 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 19:36:07.568331   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.570178   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.572756   13952 out.go:177]   - Using image docker.io/busybox:stable
	I1205 19:36:07.574608   13952 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:36:07.574624   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 19:36:07.574679   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.584005   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.591395   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.593060   13952 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:36:07.593078   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:36:07.593184   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.601820   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.602558   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.603671   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.609023   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.612189   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.613325   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.615205   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.615663   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.627525   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 19:36:07.628714   13952 node_ready.go:35] waiting up to 6m0s for node "addons-030936" to be "Ready" ...
	W1205 19:36:07.632407   13952 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 19:36:07.632457   13952 retry.go:31] will retry after 273.42402ms: ssh: handshake failed: EOF
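The first SSH dial fails during the handshake (EOF) while the container is still coming up, and sshutil retries after a short backoff; every later dial to port 32772 succeeds. The endpoint can be probed directly with the credentials logged above, a minimal sketch:

	ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 -p 32772 \
	  -i /home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa \
	  docker@127.0.0.1 true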
	I1205 19:36:07.830288   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 19:36:07.929696   13952 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 19:36:07.929729   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 19:36:07.933858   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
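Each addon follows the same two-step pattern visible above: the manifest is streamed over SSH into /etc/kubernetes/addons (the `scp memory` lines), then applied with the version-pinned kubectl inside the node. A rough manual equivalent, assuming a local copy of the manifest named storage-provisioner.yaml:

	scp -o StrictHostKeyChecking=no -P 32772 \
	  -i /home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa \
	  storage-provisioner.yaml docker@127.0.0.1:/tmp/
	ssh -p 32772 -i /home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa docker@127.0.0.1 \
	  'sudo cp /tmp/storage-provisioner.yaml /etc/kubernetes/addons/ && sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml'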
	I1205 19:36:07.942331   13952 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1205 19:36:07.942363   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1205 19:36:08.044517   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:36:08.044792   13952 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1205 19:36:08.044853   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1205 19:36:08.045346   13952 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:36:08.045385   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 19:36:08.048297   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:36:08.124926   13952 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 19:36:08.125008   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 19:36:08.126151   13952 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 19:36:08.126176   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 19:36:08.127282   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:36:08.131218   13952 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 19:36:08.131244   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 19:36:08.135344   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:36:08.237538   13952 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1205 19:36:08.237571   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1205 19:36:08.247265   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:36:08.327946   13952 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 19:36:08.328024   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 19:36:08.334821   13952 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 19:36:08.334847   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 19:36:08.339798   13952 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1205 19:36:08.339875   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1205 19:36:08.343195   13952 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 19:36:08.343298   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 19:36:08.636628   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:36:08.638717   13952 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 19:36:08.638785   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 19:36:08.649411   13952 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:36:08.649441   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 19:36:08.725724   13952 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1205 19:36:08.725751   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1205 19:36:08.738447   13952 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 19:36:08.738478   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 19:36:08.741944   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1205 19:36:08.936728   13952 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 19:36:08.936751   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 19:36:08.944182   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:36:09.042504   13952 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 19:36:09.042603   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 19:36:09.225163   13952 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1205 19:36:09.225256   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1205 19:36:09.526363   13952 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 19:36:09.526451   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 19:36:09.546658   13952 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:09.546709   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 19:36:09.739263   13952 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1205 19:36:09.739352   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1205 19:36:09.748099   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:09.931023   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:09.944793   13952 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.317228086s)
	I1205 19:36:09.944830   13952 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
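The sed pipeline that just completed rewrites the coredns ConfigMap so host.minikube.internal resolves to the host gateway. Reconstructed from the sed expression, the stanza injected ahead of the `forward . /etc/resolv.conf` line is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

and a `log` directive is inserted ahead of `errors` so CoreDNS query logging is on.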
	I1205 19:36:10.027316   13952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 19:36:10.027342   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 19:36:10.041285   13952 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1205 19:36:10.041313   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1205 19:36:10.332881   13952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 19:36:10.332910   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 19:36:10.544893   13952 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1205 19:36:10.544939   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1205 19:36:10.626180   13952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 19:36:10.626218   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 19:36:10.926839   13952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 19:36:10.926878   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 19:36:11.125881   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1205 19:36:11.139643   13952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:36:11.139676   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 19:36:11.332072   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:36:11.545454   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.715080653s)
	I1205 19:36:12.131426   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.197524858s)
	I1205 19:36:12.131606   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.087034758s)
	I1205 19:36:12.131886   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.083563514s)
	I1205 19:36:12.231187   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:13.841091   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.713773721s)
	I1205 19:36:13.841120   13952 addons.go:467] Verifying addon ingress=true in "addons-030936"
	I1205 19:36:13.843028   13952 out.go:177] * Verifying ingress addon...
	I1205 19:36:13.841203   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.705787911s)
	I1205 19:36:13.841260   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.593949702s)
	I1205 19:36:13.841306   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.204591211s)
	I1205 19:36:13.841475   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.09948472s)
	I1205 19:36:13.841545   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.897278766s)
	I1205 19:36:13.841651   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.910544552s)
	I1205 19:36:13.841686   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.715770095s)
	I1205 19:36:13.843084   13952 addons.go:467] Verifying addon metrics-server=true in "addons-030936"
	W1205 19:36:13.843120   13952 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 19:36:13.844750   13952 retry.go:31] will retry after 303.061165ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
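This failure is pure ordering: the VolumeSnapshotClass is applied in the same batch as the CRD that defines it, before the new API group is discoverable, hence `ensure CRDs are installed first`. The runner retries with `kubectl apply --force` below and succeeds once the CRDs are established; an equivalent manual fix is to apply the CRDs, wait for them, then apply the class:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml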
	I1205 19:36:13.843121   13952 addons.go:467] Verifying addon registry=true in "addons-030936"
	I1205 19:36:13.846361   13952 out.go:177] * Verifying registry addon...
	I1205 19:36:13.845478   13952 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 19:36:13.848607   13952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 19:36:13.852547   13952 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 19:36:13.852571   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:13.853627   13952 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 19:36:13.853646   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:13.855575   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:13.856284   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
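kapi.go polls the labelled pods roughly every half second (compare the timestamps below) until they leave Pending, which is what the long runs of `waiting for pod` lines are. The same gate, expressed directly for the ingress controller:

	kubectl --context addons-030936 -n ingress-nginx wait pod \
	  -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=5m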
	I1205 19:36:14.148230   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:14.341219   13952 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 19:36:14.341308   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:14.360182   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:14.360482   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:14.362044   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:14.638429   13952 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 19:36:14.732175   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:14.738099   13952 addons.go:231] Setting addon gcp-auth=true in "addons-030936"
	I1205 19:36:14.738156   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:14.738647   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:14.758604   13952 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 19:36:14.758655   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:14.775146   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:14.833092   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.500911168s)
	I1205 19:36:14.833134   13952 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-030936"
	I1205 19:36:14.835189   13952 out.go:177] * Verifying csi-hostpath-driver addon...
	I1205 19:36:14.837440   13952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 19:36:14.842910   13952 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 19:36:14.842940   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:14.852072   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:14.859164   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:14.925491   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:15.427725   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:15.427805   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:15.428400   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:15.929617   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:15.930577   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:15.931401   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:16.430299   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:16.431318   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:16.432160   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:16.639334   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.49105348s)
	I1205 19:36:16.639348   13952 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.880711701s)
	I1205 19:36:16.641734   13952 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:36:16.643796   13952 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1205 19:36:16.645602   13952 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 19:36:16.645628   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 19:36:16.724870   13952 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 19:36:16.724941   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 19:36:16.748296   13952 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:36:16.748324   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1205 19:36:16.840353   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:36:16.856557   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:16.928679   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:16.929051   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:17.226347   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:17.358273   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:17.427531   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:17.428718   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:17.859577   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:17.861739   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:17.862516   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:18.431589   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:18.432628   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.592231697s)
	I1205 19:36:18.433737   13952 addons.go:467] Verifying addon gcp-auth=true in "addons-030936"
	I1205 19:36:18.435486   13952 out.go:177] * Verifying gcp-auth addon...
	I1205 19:36:18.437892   13952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 19:36:18.439950   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:18.440497   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:18.442225   13952 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 19:36:18.442271   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:18.446697   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:18.856381   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:18.859278   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:18.860306   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:18.950522   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:19.357176   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:19.359361   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:19.359441   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:19.450416   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:19.656464   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:19.855775   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:19.858721   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:19.859260   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:19.950211   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:20.356446   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:20.358792   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:20.359418   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:20.450125   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:20.857275   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:20.859910   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:20.859971   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:20.949866   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:21.355478   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:21.359118   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:21.360085   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:21.449799   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:21.855959   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:21.859213   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:21.859493   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:21.950135   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:22.155842   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:22.356475   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:22.359008   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:22.359394   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:22.450113   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:22.855912   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:22.859198   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:22.859227   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:22.950019   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:23.356101   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:23.358990   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:23.359330   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:23.449691   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:23.855676   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:23.858676   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:23.859289   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:23.949928   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:24.355892   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:24.358898   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:24.359666   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:24.450565   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:24.657659   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:24.856375   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:24.858946   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:24.859079   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:24.949746   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:25.356872   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:25.359183   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:25.359196   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:25.449977   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:25.856359   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:25.858881   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:25.859094   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:25.949775   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:26.356896   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:26.359730   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:26.360041   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:26.450302   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:26.856341   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:26.858578   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:26.859001   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:26.949678   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:27.155196   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:27.356826   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:27.359072   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:27.359175   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:27.449846   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:27.856462   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:27.858817   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:27.859162   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:27.949971   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:28.356434   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:28.359218   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:28.359328   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:28.449733   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:28.855628   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:28.858361   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:28.859884   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:28.949382   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:29.155765   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:29.356190   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:29.359228   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:29.359646   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:29.450223   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:29.856432   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:29.859282   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:29.859289   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:29.949797   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:30.355744   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:30.358735   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:30.359319   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:30.450171   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:30.856625   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:30.859071   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:30.859274   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:30.949918   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:31.355705   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:31.358609   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:31.360145   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:31.449997   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:31.655886   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:31.856091   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:31.858802   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:31.859707   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:31.950451   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:32.356110   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:32.358857   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:32.359619   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:32.450075   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:32.855614   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:32.858518   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:32.860061   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:32.949912   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:33.355847   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:33.359046   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:33.359047   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:33.449927   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:33.857111   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:33.859462   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:33.859668   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:33.950529   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:34.156053   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:34.356755   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:34.359780   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:34.361574   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:34.450308   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:34.856542   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:34.859093   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:34.859375   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:34.950042   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:35.356598   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:35.359636   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:35.359722   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:35.450531   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:35.856829   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:35.859098   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:35.859163   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:35.949793   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:36.355636   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:36.358785   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:36.360018   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:36.449842   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:36.655266   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:36.855621   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:36.858635   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:36.860178   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:36.953615   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:37.356711   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:37.359575   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:37.359844   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:37.450027   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:37.856223   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:37.859036   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:37.859372   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:37.949911   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:38.356026   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:38.358681   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:38.359629   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:38.450139   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:38.655820   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:38.856488   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:38.858815   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:38.859186   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:38.949788   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:39.356487   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:39.359021   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:39.359093   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:39.449895   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:39.856910   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:39.859218   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:39.859382   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:39.950438   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:40.356557   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:40.359311   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:40.359348   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:40.454574   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:40.657354   13952 node_ready.go:49] node "addons-030936" has status "Ready":"True"
	I1205 19:36:40.657384   13952 node_ready.go:38] duration metric: took 33.028648818s waiting for node "addons-030936" to be "Ready" ...
	I1205 19:36:40.657397   13952 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:36:40.666589   13952 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cvgxt" in "kube-system" namespace to be "Ready" ...
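(The node_ready/pod_ready lines above are minikube's readiness polling loops. A minimal client-go sketch of the same wait-for-node-Ready pattern follows; the function name, the 2-second interval, and reading ~/.kube/config are illustrative assumptions, not minikube's actual node_ready.go code.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls until the node's Ready condition is True,
    // mirroring the `node_ready.go ... "Ready":"False"` lines above.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat API errors as transient; keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)
        if err := waitNodeReady(cs, "addons-030936", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }
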
	I1205 19:36:40.857077   13952 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 19:36:40.857102   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:40.860191   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:40.861143   13952 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 19:36:40.861163   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:40.949572   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:41.360097   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:41.360257   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:41.425406   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:41.450376   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:41.857726   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:41.860634   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:41.860844   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:41.949955   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:42.357200   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:42.359720   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:42.360424   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:42.449985   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:42.685166   13952 pod_ready.go:102] pod "coredns-5dd5756b68-cvgxt" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:42.859175   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:42.860998   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:42.862322   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:42.950722   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:43.185880   13952 pod_ready.go:92] pod "coredns-5dd5756b68-cvgxt" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:43.185907   13952 pod_ready.go:81] duration metric: took 2.519289974s waiting for pod "coredns-5dd5756b68-cvgxt" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.185929   13952 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-030936" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.190857   13952 pod_ready.go:92] pod "etcd-addons-030936" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:43.190880   13952 pod_ready.go:81] duration metric: took 4.943475ms waiting for pod "etcd-addons-030936" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.190893   13952 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-030936" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.195784   13952 pod_ready.go:92] pod "kube-apiserver-addons-030936" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:43.195805   13952 pod_ready.go:81] duration metric: took 4.90688ms waiting for pod "kube-apiserver-addons-030936" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.195818   13952 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-030936" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.200689   13952 pod_ready.go:92] pod "kube-controller-manager-addons-030936" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:43.200711   13952 pod_ready.go:81] duration metric: took 4.888204ms waiting for pod "kube-controller-manager-addons-030936" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.200722   13952 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kp9gj" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.358211   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:43.359995   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:43.361729   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:43.450400   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:43.456622   13952 pod_ready.go:92] pod "kube-proxy-kp9gj" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:43.456650   13952 pod_ready.go:81] duration metric: took 255.922458ms waiting for pod "kube-proxy-kp9gj" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.456659   13952 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-030936" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.856560   13952 pod_ready.go:92] pod "kube-scheduler-addons-030936" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:43.856590   13952 pod_ready.go:81] duration metric: took 399.925066ms waiting for pod "kube-scheduler-addons-030936" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.856601   13952 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-8586h" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.858740   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:43.860713   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:43.861203   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:43.949904   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:44.357077   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:44.359412   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:44.360767   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:44.450478   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:44.858590   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:44.859256   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:44.860080   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:44.949605   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:45.358536   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:45.359512   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:45.360764   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:45.450671   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:45.857805   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:45.860584   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:45.925707   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:45.950235   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:46.231698   13952 pod_ready.go:102] pod "metrics-server-7c66d45ddc-8586h" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:46.430856   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:46.433411   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:46.434025   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:46.452969   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:46.858567   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:46.860481   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:46.861166   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:46.950759   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:47.358589   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:47.362122   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:47.362450   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:47.450227   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:47.857260   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:47.859383   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:47.860286   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:47.949719   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:48.163001   13952 pod_ready.go:92] pod "metrics-server-7c66d45ddc-8586h" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:48.163028   13952 pod_ready.go:81] duration metric: took 4.306419505s waiting for pod "metrics-server-7c66d45ddc-8586h" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:48.163039   13952 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:48.357540   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:48.359214   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:48.360179   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:48.450357   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:48.857387   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:48.859160   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:48.860677   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:48.950414   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:49.357588   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:49.360066   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:49.361435   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:49.450112   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:49.857300   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:49.859457   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:49.859656   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:49.950779   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:50.263848   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:50.358345   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:50.359611   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:50.360369   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:50.450295   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:50.938212   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:50.938426   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:50.939026   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:50.970067   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:51.358053   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:51.359574   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:51.360337   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:51.450614   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:51.926687   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:51.929719   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:51.930112   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:51.953723   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:52.331915   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:52.436873   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:52.438708   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:52.534796   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:52.539255   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:52.858888   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:52.859517   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:52.860664   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:52.950870   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:53.358205   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:53.360205   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:53.360527   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:53.450267   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:53.858643   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:53.865964   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:53.865987   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:53.951100   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:54.357974   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:54.362654   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:54.362686   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:54.450423   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:54.763709   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:54.858387   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:54.860119   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:54.861198   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:54.949766   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:55.358530   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:55.360265   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:55.361228   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:55.450256   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:55.858202   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:55.860084   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:55.860712   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:55.950429   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:56.357660   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:56.359942   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:56.360214   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:56.451237   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:56.763847   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:56.858313   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:56.862140   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:56.862708   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:56.950345   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:57.357824   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:57.359884   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:57.360365   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:57.449731   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:57.858399   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:57.928042   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:57.928080   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:57.951915   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:58.358312   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:58.359768   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:58.361037   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:58.450926   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:58.857259   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:58.859851   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:58.860635   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:58.950458   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:59.263568   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:59.357140   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:59.359206   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:59.360342   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:59.449928   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:59.857911   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:59.860030   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:59.860373   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:59.950317   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:00.357318   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:00.359352   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:00.360418   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:00.450095   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:00.859373   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:00.862931   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:00.863053   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:00.950213   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:01.357555   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:01.360383   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:01.360515   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:01.450206   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:01.763247   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:01.857354   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:01.859495   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:01.859540   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:01.949920   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:02.357402   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:02.359434   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:02.360385   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:02.450104   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:02.857116   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:02.861055   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:02.861096   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:02.950570   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:03.357518   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:03.358894   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:03.360928   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:03.450466   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:03.763331   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:03.857162   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:03.859398   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:03.860735   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:03.950066   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:04.433232   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:04.437946   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:04.438867   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:04.526931   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:04.931576   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:04.932493   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:04.934497   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:05.027706   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:05.357424   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:05.359332   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:05.360810   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:05.450702   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:05.763451   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:05.857617   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:05.860621   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:05.861517   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:05.950413   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:06.358330   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:06.360563   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:06.360764   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:06.450463   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:06.858058   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:06.860254   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:06.862162   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:06.950383   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:07.356814   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:07.359316   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:07.360857   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:07.450816   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:07.763736   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:07.858169   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:07.859358   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:07.860133   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:07.950828   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:08.357854   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:08.359357   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:08.360269   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:08.450097   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:08.861294   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:08.862156   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:08.864702   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:08.950206   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:09.357068   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:09.359119   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:09.360583   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:09.453106   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:09.857895   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:09.860287   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:09.862521   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:09.953607   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:10.264320   13952 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:10.264346   13952 pod_ready.go:81] duration metric: took 22.101299085s waiting for pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:10.264373   13952 pod_ready.go:38] duration metric: took 29.606962926s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:37:10.264396   13952 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:37:10.264428   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 19:37:10.264491   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 19:37:10.342654   13952 cri.go:89] found id: "3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646"
	I1205 19:37:10.342679   13952 cri.go:89] found id: ""
	I1205 19:37:10.342690   13952 logs.go:284] 1 containers: [3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646]
	I1205 19:37:10.342742   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:10.346118   13952 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 19:37:10.346191   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 19:37:10.358919   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:10.359844   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:10.360539   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:10.445053   13952 cri.go:89] found id: "722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5"
	I1205 19:37:10.445080   13952 cri.go:89] found id: ""
	I1205 19:37:10.445089   13952 logs.go:284] 1 containers: [722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5]
	I1205 19:37:10.445141   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:10.448474   13952 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 19:37:10.448538   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 19:37:10.450851   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:10.542451   13952 cri.go:89] found id: "ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f"
	I1205 19:37:10.542480   13952 cri.go:89] found id: ""
	I1205 19:37:10.542490   13952 logs.go:284] 1 containers: [ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f]
	I1205 19:37:10.542543   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:10.546230   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 19:37:10.546304   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 19:37:10.640759   13952 cri.go:89] found id: "479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b"
	I1205 19:37:10.640782   13952 cri.go:89] found id: ""
	I1205 19:37:10.640792   13952 logs.go:284] 1 containers: [479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b]
	I1205 19:37:10.640840   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:10.644284   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 19:37:10.644383   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 19:37:10.746496   13952 cri.go:89] found id: "6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093"
	I1205 19:37:10.746528   13952 cri.go:89] found id: ""
	I1205 19:37:10.746538   13952 logs.go:284] 1 containers: [6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093]
	I1205 19:37:10.746591   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:10.750572   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 19:37:10.750650   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 19:37:10.834441   13952 cri.go:89] found id: "f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc"
	I1205 19:37:10.834468   13952 cri.go:89] found id: ""
	I1205 19:37:10.834478   13952 logs.go:284] 1 containers: [f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc]
	I1205 19:37:10.834533   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:10.838018   13952 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 19:37:10.838081   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 19:37:10.859035   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:10.860168   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:10.860873   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:10.928557   13952 cri.go:89] found id: "6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4"
	I1205 19:37:10.928586   13952 cri.go:89] found id: ""
	I1205 19:37:10.928596   13952 logs.go:284] 1 containers: [6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4]
	I1205 19:37:10.928652   13952 ssh_runner.go:195] Run: which crictl
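(The cri.go/ssh_runner.go lines above enumerate control-plane containers by shelling out to `sudo crictl ps -a --quiet --name=<name>`, one container ID per output line. A self-contained sketch of that pattern, executed locally rather than through minikube's ssh_runner; the helper name is an assumption for illustration.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listCRIContainers runs the same command the cri.go lines above log,
    // returning the container IDs matching the given name filter.
    func listCRIContainers(name string) ([]string, error) {
        // Equivalent to: sudo crictl ps -a --quiet --name=<name>
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // --quiet prints one ID per line
    }

    func main() {
        ids, err := listCRIContainers("kube-apiserver")
        if err != nil {
            panic(err)
        }
        fmt.Printf("found %d container(s): %v\n", len(ids), ids)
    }
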
	I1205 19:37:10.932263   13952 logs.go:123] Gathering logs for kube-apiserver [3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646] ...
	I1205 19:37:10.932291   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646"
	I1205 19:37:10.950515   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:10.980877   13952 logs.go:123] Gathering logs for etcd [722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5] ...
	I1205 19:37:10.980913   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5"
	I1205 19:37:11.075311   13952 logs.go:123] Gathering logs for coredns [ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f] ...
	I1205 19:37:11.075364   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f"
	I1205 19:37:11.159406   13952 logs.go:123] Gathering logs for kube-proxy [6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093] ...
	I1205 19:37:11.159440   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093"
	I1205 19:37:11.249807   13952 logs.go:123] Gathering logs for kube-controller-manager [f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc] ...
	I1205 19:37:11.249833   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc"
	I1205 19:37:11.358669   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:11.358925   13952 logs.go:123] Gathering logs for kindnet [6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4] ...
	I1205 19:37:11.358985   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4"
	I1205 19:37:11.359860   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:11.360136   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:11.429967   13952 logs.go:123] Gathering logs for CRI-O ...
	I1205 19:37:11.429999   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 19:37:11.450355   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:11.507699   13952 logs.go:123] Gathering logs for dmesg ...
	I1205 19:37:11.507733   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 19:37:11.535713   13952 logs.go:123] Gathering logs for container status ...
	I1205 19:37:11.535740   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 19:37:11.580965   13952 logs.go:123] Gathering logs for describe nodes ...
	I1205 19:37:11.580998   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 19:37:11.762121   13952 logs.go:123] Gathering logs for kube-scheduler [479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b] ...
	I1205 19:37:11.762163   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b"
	I1205 19:37:11.842546   13952 logs.go:123] Gathering logs for kubelet ...
	I1205 19:37:11.842586   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 19:37:11.862592   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:11.863864   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:11.865161   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:11.951147   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:12.358525   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:12.360313   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:12.360425   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:12.450115   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:12.857391   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:12.859419   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:12.860882   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:12.950889   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:13.357424   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:13.359831   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:13.360131   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:13.449699   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:13.933842   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:13.938141   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:13.939868   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:14.030743   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:14.358491   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:14.427061   13952 kapi.go:107] duration metric: took 1m0.578449612s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 19:37:14.427287   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:14.437631   13952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:37:14.450830   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:14.526059   13952 api_server.go:72] duration metric: took 1m7.030601625s to wait for apiserver process to appear ...
	I1205 19:37:14.526089   13952 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:37:14.526126   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 19:37:14.526187   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 19:37:14.737387   13952 cri.go:89] found id: "3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646"
	I1205 19:37:14.737415   13952 cri.go:89] found id: ""
	I1205 19:37:14.737437   13952 logs.go:284] 1 containers: [3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646]
	I1205 19:37:14.737487   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:14.742634   13952 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 19:37:14.742742   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 19:37:14.929001   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:14.929017   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:14.937007   13952 cri.go:89] found id: "722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5"
	I1205 19:37:14.937035   13952 cri.go:89] found id: ""
	I1205 19:37:14.937045   13952 logs.go:284] 1 containers: [722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5]
	I1205 19:37:14.937102   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:14.940963   13952 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 19:37:14.941026   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 19:37:14.954544   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:15.142462   13952 cri.go:89] found id: "ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f"
	I1205 19:37:15.142491   13952 cri.go:89] found id: ""
	I1205 19:37:15.142501   13952 logs.go:284] 1 containers: [ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f]
	I1205 19:37:15.142562   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:15.146074   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 19:37:15.146142   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 19:37:15.327506   13952 cri.go:89] found id: "479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b"
	I1205 19:37:15.327532   13952 cri.go:89] found id: ""
	I1205 19:37:15.327541   13952 logs.go:284] 1 containers: [479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b]
	I1205 19:37:15.327589   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:15.331933   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 19:37:15.331995   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 19:37:15.358699   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:15.427780   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:15.445833   13952 cri.go:89] found id: "6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093"
	I1205 19:37:15.445859   13952 cri.go:89] found id: ""
	I1205 19:37:15.445868   13952 logs.go:284] 1 containers: [6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093]
	I1205 19:37:15.445919   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:15.449832   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:15.449942   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 19:37:15.449995   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 19:37:15.543218   13952 cri.go:89] found id: "f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc"
	I1205 19:37:15.543294   13952 cri.go:89] found id: ""
	I1205 19:37:15.543309   13952 logs.go:284] 1 containers: [f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc]
	I1205 19:37:15.543364   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:15.546899   13952 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 19:37:15.546958   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 19:37:15.644182   13952 cri.go:89] found id: "6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4"
	I1205 19:37:15.644285   13952 cri.go:89] found id: ""
	I1205 19:37:15.644298   13952 logs.go:284] 1 containers: [6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4]
	I1205 19:37:15.644358   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:15.648312   13952 logs.go:123] Gathering logs for container status ...
	I1205 19:37:15.648335   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 19:37:15.750164   13952 logs.go:123] Gathering logs for kubelet ...
	I1205 19:37:15.750193   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 19:37:15.858393   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:15.860174   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:15.922401   13952 logs.go:123] Gathering logs for dmesg ...
	I1205 19:37:15.922451   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 19:37:15.936130   13952 logs.go:123] Gathering logs for describe nodes ...
	I1205 19:37:15.936156   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 19:37:15.950094   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:16.084352   13952 logs.go:123] Gathering logs for coredns [ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f] ...
	I1205 19:37:16.084382   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f"
	I1205 19:37:16.155159   13952 logs.go:123] Gathering logs for kindnet [6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4] ...
	I1205 19:37:16.155195   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4"
	I1205 19:37:16.229835   13952 logs.go:123] Gathering logs for CRI-O ...
	I1205 19:37:16.229868   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 19:37:16.311136   13952 logs.go:123] Gathering logs for kube-apiserver [3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646] ...
	I1205 19:37:16.311171   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646"
	I1205 19:37:16.358700   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:16.359192   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:16.364960   13952 logs.go:123] Gathering logs for etcd [722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5] ...
	I1205 19:37:16.364991   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5"
	I1205 19:37:16.444776   13952 logs.go:123] Gathering logs for kube-scheduler [479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b] ...
	I1205 19:37:16.444815   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b"
	I1205 19:37:16.449909   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:16.485879   13952 logs.go:123] Gathering logs for kube-proxy [6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093] ...
	I1205 19:37:16.485911   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093"
	I1205 19:37:16.564267   13952 logs.go:123] Gathering logs for kube-controller-manager [f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc] ...
	I1205 19:37:16.564302   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc"
	I1205 19:37:16.857710   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:16.860683   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:16.950931   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:17.357943   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:17.361662   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:17.449707   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:17.858234   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:17.859496   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:17.950606   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:18.357361   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:18.359745   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:18.449800   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:18.857494   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:18.859743   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:18.950817   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:19.176767   13952 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:37:19.182696   13952 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1205 19:37:19.183980   13952 api_server.go:141] control plane version: v1.28.4
	I1205 19:37:19.184001   13952 api_server.go:131] duration metric: took 4.657906109s to wait for apiserver health ...
	I1205 19:37:19.184009   13952 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:37:19.184029   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 19:37:19.184068   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 19:37:19.218186   13952 cri.go:89] found id: "3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646"
	I1205 19:37:19.218213   13952 cri.go:89] found id: ""
	I1205 19:37:19.218222   13952 logs.go:284] 1 containers: [3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646]
	I1205 19:37:19.218280   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:19.221499   13952 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 19:37:19.221559   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 19:37:19.254582   13952 cri.go:89] found id: "722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5"
	I1205 19:37:19.254616   13952 cri.go:89] found id: ""
	I1205 19:37:19.254627   13952 logs.go:284] 1 containers: [722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5]
	I1205 19:37:19.254674   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:19.257838   13952 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 19:37:19.257887   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 19:37:19.291757   13952 cri.go:89] found id: "ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f"
	I1205 19:37:19.291779   13952 cri.go:89] found id: ""
	I1205 19:37:19.291789   13952 logs.go:284] 1 containers: [ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f]
	I1205 19:37:19.291839   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:19.295767   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 19:37:19.295835   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 19:37:19.355474   13952 cri.go:89] found id: "479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b"
	I1205 19:37:19.355500   13952 cri.go:89] found id: ""
	I1205 19:37:19.355510   13952 logs.go:284] 1 containers: [479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b]
	I1205 19:37:19.355559   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:19.357374   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:19.358848   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:19.359155   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 19:37:19.359209   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 19:37:19.391940   13952 cri.go:89] found id: "6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093"
	I1205 19:37:19.391967   13952 cri.go:89] found id: ""
	I1205 19:37:19.391978   13952 logs.go:284] 1 containers: [6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093]
	I1205 19:37:19.392030   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:19.428497   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 19:37:19.428562   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 19:37:19.450849   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:19.527547   13952 cri.go:89] found id: "f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc"
	I1205 19:37:19.527575   13952 cri.go:89] found id: ""
	I1205 19:37:19.527586   13952 logs.go:284] 1 containers: [f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc]
	I1205 19:37:19.527640   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:19.531342   13952 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 19:37:19.531395   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 19:37:19.633476   13952 cri.go:89] found id: "6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4"
	I1205 19:37:19.633500   13952 cri.go:89] found id: ""
	I1205 19:37:19.633509   13952 logs.go:284] 1 containers: [6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4]
	I1205 19:37:19.633564   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:19.637538   13952 logs.go:123] Gathering logs for dmesg ...
	I1205 19:37:19.637566   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 19:37:19.651036   13952 logs.go:123] Gathering logs for kube-apiserver [3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646] ...
	I1205 19:37:19.651072   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646"
	I1205 19:37:19.765819   13952 logs.go:123] Gathering logs for coredns [ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f] ...
	I1205 19:37:19.765850   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f"
	I1205 19:37:19.858265   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:19.859806   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:19.862301   13952 logs.go:123] Gathering logs for kube-controller-manager [f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc] ...
	I1205 19:37:19.862323   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc"
	I1205 19:37:19.950108   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:19.976919   13952 logs.go:123] Gathering logs for container status ...
	I1205 19:37:19.976952   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 19:37:20.066936   13952 logs.go:123] Gathering logs for kubelet ...
	I1205 19:37:20.066966   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 19:37:20.201126   13952 logs.go:123] Gathering logs for etcd [722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5] ...
	I1205 19:37:20.201161   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5"
	I1205 19:37:20.251427   13952 logs.go:123] Gathering logs for kube-scheduler [479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b] ...
	I1205 19:37:20.251466   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b"
	I1205 19:37:20.291696   13952 logs.go:123] Gathering logs for kube-proxy [6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093] ...
	I1205 19:37:20.291725   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093"
	I1205 19:37:20.331671   13952 logs.go:123] Gathering logs for kindnet [6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4] ...
	I1205 19:37:20.331699   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4"
	I1205 19:37:20.357679   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:20.359705   13952 kapi.go:107] duration metric: took 1m6.514224456s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 19:37:20.365595   13952 logs.go:123] Gathering logs for CRI-O ...
	I1205 19:37:20.365618   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 19:37:20.435429   13952 logs.go:123] Gathering logs for describe nodes ...
	I1205 19:37:20.435469   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 19:37:20.449997   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:20.857483   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:20.950796   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:21.358017   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:21.450042   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:21.858241   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:21.950298   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:22.357519   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:22.450367   13952 kapi.go:107] duration metric: took 1m4.0124757s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 19:37:22.489507   13952 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-030936 cluster.
	I1205 19:37:22.624407   13952 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 19:37:22.646642   13952 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1205 19:37:22.857595   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:23.047420   13952 system_pods.go:59] 19 kube-system pods found
	I1205 19:37:23.047451   13952 system_pods.go:61] "coredns-5dd5756b68-cvgxt" [da64d584-8b3b-46ec-884f-57a0d22f1f0c] Running
	I1205 19:37:23.047455   13952 system_pods.go:61] "csi-hostpath-attacher-0" [d0f34f73-e182-4cd3-af1e-fdc87a1247fd] Running
	I1205 19:37:23.047459   13952 system_pods.go:61] "csi-hostpath-resizer-0" [ad7fe1de-e5d3-41a6-a669-5eb13661ece8] Running
	I1205 19:37:23.047466   13952 system_pods.go:61] "csi-hostpathplugin-299pr" [efebc474-fc37-42df-972e-611870fd272f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 19:37:23.047472   13952 system_pods.go:61] "etcd-addons-030936" [9084d1b2-7456-491d-ba13-81e119e50b8d] Running
	I1205 19:37:23.047478   13952 system_pods.go:61] "kindnet-b6nhd" [2863a3e1-2878-4b1f-b10e-c1f20e137d62] Running
	I1205 19:37:23.047482   13952 system_pods.go:61] "kube-apiserver-addons-030936" [16aa497e-3c1e-4dc3-a68d-03e471801572] Running
	I1205 19:37:23.047488   13952 system_pods.go:61] "kube-controller-manager-addons-030936" [a14860cb-50ff-49fa-a08b-ee3282939d60] Running
	I1205 19:37:23.047496   13952 system_pods.go:61] "kube-ingress-dns-minikube" [f6dfd03f-7966-4c37-89c4-e5a4a1c2e395] Running
	I1205 19:37:23.047500   13952 system_pods.go:61] "kube-proxy-kp9gj" [ef75f123-2e3d-4345-be48-46c46e8aa537] Running
	I1205 19:37:23.047507   13952 system_pods.go:61] "kube-scheduler-addons-030936" [23a4feca-cd28-4ed4-b9a6-85cc60e7843f] Running
	I1205 19:37:23.047511   13952 system_pods.go:61] "metrics-server-7c66d45ddc-8586h" [22718867-f984-4ef4-846c-45896c7a82bf] Running
	I1205 19:37:23.047517   13952 system_pods.go:61] "nvidia-device-plugin-daemonset-wnvvv" [78a4b26e-4608-4170-8a6a-de17b217468b] Running
	I1205 19:37:23.047521   13952 system_pods.go:61] "registry-hmgc4" [4f36e16b-74e5-4183-ae54-777afcc87dc9] Running
	I1205 19:37:23.047525   13952 system_pods.go:61] "registry-proxy-9wsfw" [23d952c3-eba0-4788-b241-d477ed5081a1] Running
	I1205 19:37:23.047529   13952 system_pods.go:61] "snapshot-controller-58dbcc7b99-gqmd7" [87f47f40-88ec-4064-8493-13ec94933413] Running
	I1205 19:37:23.047535   13952 system_pods.go:61] "snapshot-controller-58dbcc7b99-qhmfd" [8e7a0adb-b7df-409f-978d-28c4e57c2cfb] Running
	I1205 19:37:23.047539   13952 system_pods.go:61] "storage-provisioner" [ef82e7dd-313a-447e-84be-b95404c573a6] Running
	I1205 19:37:23.047545   13952 system_pods.go:61] "tiller-deploy-7b677967b9-cdtmt" [9203256c-9bc5-49b8-8ef1-47ca632955a8] Running
	I1205 19:37:23.047551   13952 system_pods.go:74] duration metric: took 3.863537598s to wait for pod list to return data ...
	I1205 19:37:23.047560   13952 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:37:23.049471   13952 default_sa.go:45] found service account: "default"
	I1205 19:37:23.049492   13952 default_sa.go:55] duration metric: took 1.923353ms for default service account to be created ...
	I1205 19:37:23.049501   13952 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:37:23.057551   13952 system_pods.go:86] 19 kube-system pods found
	I1205 19:37:23.057579   13952 system_pods.go:89] "coredns-5dd5756b68-cvgxt" [da64d584-8b3b-46ec-884f-57a0d22f1f0c] Running
	I1205 19:37:23.057585   13952 system_pods.go:89] "csi-hostpath-attacher-0" [d0f34f73-e182-4cd3-af1e-fdc87a1247fd] Running
	I1205 19:37:23.057590   13952 system_pods.go:89] "csi-hostpath-resizer-0" [ad7fe1de-e5d3-41a6-a669-5eb13661ece8] Running
	I1205 19:37:23.057598   13952 system_pods.go:89] "csi-hostpathplugin-299pr" [efebc474-fc37-42df-972e-611870fd272f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 19:37:23.057604   13952 system_pods.go:89] "etcd-addons-030936" [9084d1b2-7456-491d-ba13-81e119e50b8d] Running
	I1205 19:37:23.057611   13952 system_pods.go:89] "kindnet-b6nhd" [2863a3e1-2878-4b1f-b10e-c1f20e137d62] Running
	I1205 19:37:23.057615   13952 system_pods.go:89] "kube-apiserver-addons-030936" [16aa497e-3c1e-4dc3-a68d-03e471801572] Running
	I1205 19:37:23.057622   13952 system_pods.go:89] "kube-controller-manager-addons-030936" [a14860cb-50ff-49fa-a08b-ee3282939d60] Running
	I1205 19:37:23.057627   13952 system_pods.go:89] "kube-ingress-dns-minikube" [f6dfd03f-7966-4c37-89c4-e5a4a1c2e395] Running
	I1205 19:37:23.057631   13952 system_pods.go:89] "kube-proxy-kp9gj" [ef75f123-2e3d-4345-be48-46c46e8aa537] Running
	I1205 19:37:23.057635   13952 system_pods.go:89] "kube-scheduler-addons-030936" [23a4feca-cd28-4ed4-b9a6-85cc60e7843f] Running
	I1205 19:37:23.057642   13952 system_pods.go:89] "metrics-server-7c66d45ddc-8586h" [22718867-f984-4ef4-846c-45896c7a82bf] Running
	I1205 19:37:23.057647   13952 system_pods.go:89] "nvidia-device-plugin-daemonset-wnvvv" [78a4b26e-4608-4170-8a6a-de17b217468b] Running
	I1205 19:37:23.057650   13952 system_pods.go:89] "registry-hmgc4" [4f36e16b-74e5-4183-ae54-777afcc87dc9] Running
	I1205 19:37:23.057654   13952 system_pods.go:89] "registry-proxy-9wsfw" [23d952c3-eba0-4788-b241-d477ed5081a1] Running
	I1205 19:37:23.057658   13952 system_pods.go:89] "snapshot-controller-58dbcc7b99-gqmd7" [87f47f40-88ec-4064-8493-13ec94933413] Running
	I1205 19:37:23.057664   13952 system_pods.go:89] "snapshot-controller-58dbcc7b99-qhmfd" [8e7a0adb-b7df-409f-978d-28c4e57c2cfb] Running
	I1205 19:37:23.057668   13952 system_pods.go:89] "storage-provisioner" [ef82e7dd-313a-447e-84be-b95404c573a6] Running
	I1205 19:37:23.057675   13952 system_pods.go:89] "tiller-deploy-7b677967b9-cdtmt" [9203256c-9bc5-49b8-8ef1-47ca632955a8] Running
	I1205 19:37:23.057680   13952 system_pods.go:126] duration metric: took 8.175097ms to wait for k8s-apps to be running ...
	I1205 19:37:23.057689   13952 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:37:23.057724   13952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:37:23.068162   13952 system_svc.go:56] duration metric: took 10.466551ms WaitForService to wait for kubelet.
	I1205 19:37:23.068183   13952 kubeadm.go:581] duration metric: took 1m15.572734214s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 19:37:23.068237   13952 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:37:23.070949   13952 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 19:37:23.070981   13952 node_conditions.go:123] node cpu capacity is 8
	I1205 19:37:23.070998   13952 node_conditions.go:105] duration metric: took 2.752657ms to run NodePressure ...
	I1205 19:37:23.071014   13952 start.go:228] waiting for startup goroutines ...
	I1205 19:37:23.357684   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:23.856863   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:24.357836   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:24.857871   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:25.356548   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:25.857215   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:26.357633   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:26.856647   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:27.356707   13952 kapi.go:107] duration metric: took 1m12.519266151s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 19:37:27.358626   13952 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, default-storageclass, inspektor-gadget, helm-tiller, ingress-dns, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1205 19:37:27.360502   13952 addons.go:502] enable addons completed in 1m19.908888017s: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin default-storageclass inspektor-gadget helm-tiller ingress-dns metrics-server storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1205 19:37:27.360541   13952 start.go:233] waiting for cluster config update ...
	I1205 19:37:27.360560   13952 start.go:242] writing updated cluster config ...
	I1205 19:37:27.360814   13952 ssh_runner.go:195] Run: rm -f paused
	I1205 19:37:27.431274   13952 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 19:37:27.433050   13952 out.go:177] * Done! kubectl is now configured to use "addons-030936" cluster and "default" namespace by default
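The kapi.go:96 lines above are minikube polling each addon's pod label selector until the pods report Running, and the api_server.go lines do the same for the apiserver's /healthz endpoint. Both checks can be reproduced by hand; a minimal sketch against this cluster (the label selector, context name, and IP are taken from the log; the timeout, and anonymous access to /healthz, which kubeadm permits by default, are assumptions):

    # wait for the ingress controller pods the same way kapi.go does
    kubectl --context addons-030936 -n ingress-nginx wait pod \
      --selector app.kubernetes.io/name=ingress-nginx \
      --for=condition=Ready --timeout=120s

    # probe apiserver health the same way api_server.go does
    curl -k https://192.168.49.2:8443/healthz   # prints "ok" when healthy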
	
	* 
	* ==> CRI-O <==
	* Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.214038691Z" level=info msg="Closing host port tcp:80"
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.214075115Z" level=info msg="Closing host port tcp:443"
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.215312028Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.215327265Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.215461610Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7c6974c4d8-q4wrk Namespace:ingress-nginx ID:7e90f85b55f0cbfa095091c768c7459c3acd1e2b03df9dd661861caeb8e092d7 UID:4eadd559-5bff-4a0f-811a-1bb9a6f0c907 NetNS:/var/run/netns/c7b00fe6-4571-441e-8e2e-41f46a749f0a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.215573322Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7c6974c4d8-q4wrk from CNI network \"kindnet\" (type=ptp)"
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.241632561Z" level=info msg="Stopped pod sandbox: 7e90f85b55f0cbfa095091c768c7459c3acd1e2b03df9dd661861caeb8e092d7" id=1e29ba62-bee8-43c9-9e5f-6aef13f4a152 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.361902655Z" level=info msg="Checking image status: ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" id=5133ef95-11c9-4238-8186-df337dff383a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.362159360Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d378d53ef198dac0270a2861e7752267d41db8b5bc6e33fb7376fd77122fa43c,RepoTags:[],RepoDigests:[ghcr.io/inspektor-gadget/inspektor-gadget@sha256:2c4859cacbc95d19331bdb9eaedf709c7d2655a04a74c4e93acc2e263e31b1ce ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931],Size_:249356252,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=5133ef95-11c9-4238-8186-df337dff383a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.362751750Z" level=info msg="Pulling image: ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" id=78fed753-0d63-45dd-8217-c3c5fbe7d178 name=/runtime.v1.ImageService/PullImage
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.366932325Z" level=info msg="Trying to access \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931\""
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.463671890Z" level=info msg="Removing container: 796ce31d0daf6b3dae83fa7252eee6463ba4a058d6b2a59543647184df446773" id=f9dbd6f6-380c-4c83-9aa9-ad027755220a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.478622439Z" level=info msg="Removed container 796ce31d0daf6b3dae83fa7252eee6463ba4a058d6b2a59543647184df446773: ingress-nginx/ingress-nginx-controller-7c6974c4d8-q4wrk/controller" id=f9dbd6f6-380c-4c83-9aa9-ad027755220a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.566188429Z" level=info msg="Pulled image: ghcr.io/inspektor-gadget/inspektor-gadget@sha256:2c4859cacbc95d19331bdb9eaedf709c7d2655a04a74c4e93acc2e263e31b1ce" id=78fed753-0d63-45dd-8217-c3c5fbe7d178 name=/runtime.v1.ImageService/PullImage
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.567090118Z" level=info msg="Checking image status: ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" id=8a90c7b5-9596-4f6c-b072-315ea1ded19f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.567361738Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d378d53ef198dac0270a2861e7752267d41db8b5bc6e33fb7376fd77122fa43c,RepoTags:[],RepoDigests:[ghcr.io/inspektor-gadget/inspektor-gadget@sha256:2c4859cacbc95d19331bdb9eaedf709c7d2655a04a74c4e93acc2e263e31b1ce ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931],Size_:249356252,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=8a90c7b5-9596-4f6c-b072-315ea1ded19f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.568272331Z" level=info msg="Creating container: gadget/gadget-q5tnq/gadget" id=7e8df5ca-8a1c-41b2-9a5c-7f506558aa10 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.568365668Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 19:40:17 addons-030936 conmon[10504]: conmon e0ab4519b68d44d25403 <nwarn>: runtime stderr: time="2023-12-05T19:40:17Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	                                             time="2023-12-05T19:40:17Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	                                             time="2023-12-05T19:40:17Z" level=warning msg="lstat : no such file or directory"
	                                             time="2023-12-05T19:40:17Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:40:17 addons-030936 conmon[10504]: conmon e0ab4519b68d44d25403 <error>: Failed to create container: exit status 1
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.635929466Z" level=error msg="Container creation error: time=\"2023-12-05T19:40:17Z\" level=warning msg=\"cannot toggle freezer: cgroups not configured for container\"\ntime=\"2023-12-05T19:40:17Z\" level=warning msg=\"cannot toggle freezer: cgroups not configured for container\"\ntime=\"2023-12-05T19:40:17Z\" level=warning msg=\"lstat : no such file or directory\"\ntime=\"2023-12-05T19:40:17Z\" level=error msg=\"container_linux.go:380: starting container process caused: exec: \\\"/entrypoint.sh\\\": stat /entrypoint.sh: no such file or directory\"\n" id=7e8df5ca-8a1c-41b2-9a5c-7f506558aa10 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.642230833Z" level=info msg="createCtr: deleting container ID e0ab4519b68d44d25403f08b050e9336754648a452aa931108c1306875f9b481 from idIndex" id=7e8df5ca-8a1c-41b2-9a5c-7f506558aa10 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.642278265Z" level=info msg="createCtr: deleting container ID e0ab4519b68d44d25403f08b050e9336754648a452aa931108c1306875f9b481 from idIndex" id=7e8df5ca-8a1c-41b2-9a5c-7f506558aa10 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.642293188Z" level=info msg="createCtr: deleting container ID e0ab4519b68d44d25403f08b050e9336754648a452aa931108c1306875f9b481 from idIndex" id=7e8df5ca-8a1c-41b2-9a5c-7f506558aa10 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:40:17 addons-030936 crio[955]: time="2023-12-05 19:40:17.648340182Z" level=info msg="createCtr: deleting container ID e0ab4519b68d44d25403f08b050e9336754648a452aa931108c1306875f9b481 from idIndex" id=7e8df5ca-8a1c-41b2-9a5c-7f506558aa10 name=/runtime.v1.RuntimeService/CreateContainer
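The create error above shows the gadget container failing because /entrypoint.sh does not exist in the pulled inspektor-gadget image. A quick way to confirm this from the node (via `minikube ssh`) is to list the failed attempts and dump the image's metadata; a hedged sketch, with the image reference taken from the log (digest omitted):

    sudo crictl ps -a --name gadget                                         # shows the repeated CreateContainerError attempts
    sudo crictl inspecti ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1  # dump image metadata to check its configured entrypoint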
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	643adf8bf1109       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   e360fe537bf0a       hello-world-app-5d77478584-b574q
	81e3d40cc89e5       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                              2 minutes ago       Running             nginx                     0                   2d05afc766095       nginx
	92618ec38fa1e       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                        2 minutes ago       Running             headlamp                  0                   3e3827799bbaf       headlamp-777fd4b855-gcvsv
	7830c8ecdbb09       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   7a207a9e766e0       gcp-auth-d4c87556c-6cghg
	c3ce0f6577e65       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago       Exited              patch                     1                   b7cacf5b6404d       ingress-nginx-admission-patch-6j2xg
	e07f5de38aef1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   2cc47019912cb       ingress-nginx-admission-create-2pb75
	97c31a99b9606       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   85996e6146e22       storage-provisioner
	ea7f13f6d2b33       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   b9bf4177cfd1a       coredns-5dd5756b68-cvgxt
	6531d8dc9c00c       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   a575b45987803       kube-proxy-kp9gj
	6e4cc5dd757fe       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   045e5f0d0336a       kindnet-b6nhd
	479207e0ffc0b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   5ebb0d97c335b       kube-scheduler-addons-030936
	f50b81469d1cb       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   bf809c50e7e80       kube-controller-manager-addons-030936
	3aa894543b63c       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   65c1dff85fa93       kube-apiserver-addons-030936
	722b14928df5a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   0de43ad41b78f       etcd-addons-030936
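In the table above, rows whose IMAGE column is a bare hex ID are likely images stored without RepoTags (pulled by digest, as the CRI-O ImageStatus entries earlier show with `RepoTags:[]`). To map those IDs back to repository names and digests on the node, a small sketch (run via `minikube ssh`):

    sudo crictl images --digests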
	
	* 
	* ==> coredns [ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f] <==
	* [INFO] 10.244.0.18:50509 - 57152 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059166s
	[INFO] 10.244.0.18:38337 - 3359 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005049456s
	[INFO] 10.244.0.18:38337 - 25376 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.006085754s
	[INFO] 10.244.0.18:53415 - 41055 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004658712s
	[INFO] 10.244.0.18:53415 - 46242 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005264474s
	[INFO] 10.244.0.18:48949 - 54836 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003870071s
	[INFO] 10.244.0.18:48949 - 31025 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004819636s
	[INFO] 10.244.0.18:58581 - 19132 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000089878s
	[INFO] 10.244.0.18:58581 - 33471 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000142612s
	[INFO] 10.244.0.20:41385 - 1325 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000201173s
	[INFO] 10.244.0.20:51386 - 59491 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000194943s
	[INFO] 10.244.0.20:38674 - 35812 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000247835s
	[INFO] 10.244.0.20:44380 - 1506 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000088745s
	[INFO] 10.244.0.20:47769 - 61015 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088264s
	[INFO] 10.244.0.20:35353 - 9871 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008909s
	[INFO] 10.244.0.20:40292 - 10634 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.004968684s
	[INFO] 10.244.0.20:47388 - 2747 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.005156347s
	[INFO] 10.244.0.20:55893 - 7862 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005191727s
	[INFO] 10.244.0.20:34632 - 55165 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005614709s
	[INFO] 10.244.0.20:41508 - 35416 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005762919s
	[INFO] 10.244.0.20:44791 - 47553 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006706816s
	[INFO] 10.244.0.20:51267 - 37381 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.000746103s
	[INFO] 10.244.0.20:37084 - 31011 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000807565s
	[INFO] 10.244.0.24:53856 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000100728s
	[INFO] 10.244.0.24:41023 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077437s
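The NXDOMAIN bursts above are ordinary ndots expansion rather than failures: with `options ndots:5`, the resolver tries every suffix in the pod's search path before the bare name, and only the final fully-qualified query returns NOERROR. From the suffixes visible in these queries, the gcp-auth pod's /etc/resolv.conf would look roughly like this (reconstructed, not captured in the report; the nameserver is assumed to be the default kube-dns ClusterIP):

    nameserver 10.96.0.10
    search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
    options ndots:5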
	
	* 
	* ==> describe nodes <==
	* Name:               addons-030936
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-030936
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=addons-030936
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T19_35_55_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-030936
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 19:35:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-030936
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 19:40:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 19:38:58 +0000   Tue, 05 Dec 2023 19:35:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 19:38:58 +0000   Tue, 05 Dec 2023 19:35:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 19:38:58 +0000   Tue, 05 Dec 2023 19:35:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 19:38:58 +0000   Tue, 05 Dec 2023 19:36:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-030936
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 edda52c3b88241bba284156915f715bd
	  System UUID:                a0e5de66-5ed1-48da-a989-a4190bd59d70
	  Boot ID:                    cdc0538f-6890-4ebd-b17b-f40ba8f6605f
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-b574q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gadget                      gadget-q5tnq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  gcp-auth                    gcp-auth-d4c87556c-6cghg                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  headlamp                    headlamp-777fd4b855-gcvsv                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 coredns-5dd5756b68-cvgxt                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m15s
	  kube-system                 etcd-addons-030936                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m29s
	  kube-system                 kindnet-b6nhd                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m15s
	  kube-system                 kube-apiserver-addons-030936             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-controller-manager-addons-030936    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-proxy-kp9gj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-scheduler-addons-030936             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  Starting                 4m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m34s (x8 over 4m34s)  kubelet          Node addons-030936 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m34s (x8 over 4m34s)  kubelet          Node addons-030936 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s (x8 over 4m34s)  kubelet          Node addons-030936 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m28s                  kubelet          Node addons-030936 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s                  kubelet          Node addons-030936 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s                  kubelet          Node addons-030936 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m16s                  node-controller  Node addons-030936 event: Registered Node addons-030936 in Controller
	  Normal  NodeReady                3m42s                  kubelet          Node addons-030936 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.008370] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004438] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000886] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000844] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000941] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001246] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.004793] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.002458] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.213124] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 19:38] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 02 1e 1b 6b b8 50 92 50 01 2d 03 d0 08 00
	[  +1.031783] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 02 1e 1b 6b b8 50 92 50 01 2d 03 d0 08 00
	[  +2.015837] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 02 1e 1b 6b b8 50 92 50 01 2d 03 d0 08 00
	[  +4.191662] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000013] ll header: 00000000: 02 1e 1b 6b b8 50 92 50 01 2d 03 d0 08 00
	[  +8.195417] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 02 1e 1b 6b b8 50 92 50 01 2d 03 d0 08 00
	[ +16.122928] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 02 1e 1b 6b b8 50 92 50 01 2d 03 d0 08 00
	[Dec 5 19:39] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 02 1e 1b 6b b8 50 92 50 01 2d 03 d0 08 00
	
	* 
	* ==> etcd [722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5] <==
	* {"level":"info","ts":"2023-12-05T19:35:49.14835Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T19:35:49.148411Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T19:35:49.149102Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-05T19:35:49.149177Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-05T19:36:09.728641Z","caller":"traceutil/trace.go:171","msg":"trace[1335159935] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"182.848866ms","start":"2023-12-05T19:36:09.54577Z","end":"2023-12-05T19:36:09.728618Z","steps":["trace[1335159935] 'process raft request'  (duration: 182.742564ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:36:09.729668Z","caller":"traceutil/trace.go:171","msg":"trace[1553021988] linearizableReadLoop","detail":"{readStateIndex:424; appliedIndex:424; }","duration":"101.142878ms","start":"2023-12-05T19:36:09.628513Z","end":"2023-12-05T19:36:09.729656Z","steps":["trace[1553021988] 'read index received'  (duration: 101.139132ms)","trace[1553021988] 'applied index is now lower than readState.Index'  (duration: 2.952µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-05T19:36:09.729745Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.236923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-05T19:36:09.729935Z","caller":"traceutil/trace.go:171","msg":"trace[875543309] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:412; }","duration":"101.446146ms","start":"2023-12-05T19:36:09.628479Z","end":"2023-12-05T19:36:09.729925Z","steps":["trace[875543309] 'agreement among raft nodes before linearized reading'  (duration: 101.218748ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:36:10.343476Z","caller":"traceutil/trace.go:171","msg":"trace[208797160] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"101.061758ms","start":"2023-12-05T19:36:10.242393Z","end":"2023-12-05T19:36:10.343455Z","steps":["trace[208797160] 'process raft request'  (duration: 100.475919ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:36:10.625632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.916031ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128025614587148481 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:0 > success:<request_put:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" value_size:3057 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-12-05T19:36:10.642466Z","caller":"traceutil/trace.go:171","msg":"trace[1937670736] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"295.240627ms","start":"2023-12-05T19:36:10.347205Z","end":"2023-12-05T19:36:10.642445Z","steps":["trace[1937670736] 'process raft request'  (duration: 82.113293ms)","trace[1937670736] 'compare'  (duration: 195.755606ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-05T19:36:10.64863Z","caller":"traceutil/trace.go:171","msg":"trace[261661201] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"222.127609ms","start":"2023-12-05T19:36:10.426491Z","end":"2023-12-05T19:36:10.648618Z","steps":["trace[261661201] 'process raft request'  (duration: 215.819341ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:36:10.648715Z","caller":"traceutil/trace.go:171","msg":"trace[519088703] linearizableReadLoop","detail":"{readStateIndex:432; appliedIndex:430; }","duration":"107.481972ms","start":"2023-12-05T19:36:10.541227Z","end":"2023-12-05T19:36:10.648709Z","steps":["trace[519088703] 'read index received'  (duration: 31.761µs)","trace[519088703] 'applied index is now lower than readState.Index'  (duration: 107.449693ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-05T19:36:10.648847Z","caller":"traceutil/trace.go:171","msg":"trace[793529674] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"107.468793ms","start":"2023-12-05T19:36:10.541373Z","end":"2023-12-05T19:36:10.648842Z","steps":["trace[793529674] 'process raft request'  (duration: 107.041046ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:36:10.64899Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.776216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-kp9gj\" ","response":"range_response_count:1 size:4422"}
	{"level":"info","ts":"2023-12-05T19:36:10.649006Z","caller":"traceutil/trace.go:171","msg":"trace[246349267] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-kp9gj; range_end:; response_count:1; response_revision:422; }","duration":"107.804641ms","start":"2023-12-05T19:36:10.541197Z","end":"2023-12-05T19:36:10.649001Z","steps":["trace[246349267] 'agreement among raft nodes before linearized reading'  (duration: 107.756603ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:36:11.041773Z","caller":"traceutil/trace.go:171","msg":"trace[1613146233] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"297.968905ms","start":"2023-12-05T19:36:10.743778Z","end":"2023-12-05T19:36:11.041747Z","steps":["trace[1613146233] 'process raft request'  (duration: 290.31315ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:36:11.045571Z","caller":"traceutil/trace.go:171","msg":"trace[1179934501] linearizableReadLoop","detail":"{readStateIndex:438; appliedIndex:434; }","duration":"104.043543ms","start":"2023-12-05T19:36:10.941509Z","end":"2023-12-05T19:36:11.045553Z","steps":["trace[1179934501] 'read index received'  (duration: 92.591513ms)","trace[1179934501] 'applied index is now lower than readState.Index'  (duration: 11.45142ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-05T19:36:11.045789Z","caller":"traceutil/trace.go:171","msg":"trace[973044473] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"104.924509ms","start":"2023-12-05T19:36:10.940849Z","end":"2023-12-05T19:36:11.045773Z","steps":["trace[973044473] 'process raft request'  (duration: 104.523726ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:36:11.046004Z","caller":"traceutil/trace.go:171","msg":"trace[1715040571] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"104.693652ms","start":"2023-12-05T19:36:10.941299Z","end":"2023-12-05T19:36:11.045993Z","steps":["trace[1715040571] 'process raft request'  (duration: 104.175012ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:36:11.046201Z","caller":"traceutil/trace.go:171","msg":"trace[839748364] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"104.747107ms","start":"2023-12-05T19:36:10.941443Z","end":"2023-12-05T19:36:11.04619Z","steps":["trace[839748364] 'process raft request'  (duration: 104.073554ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:36:11.046326Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.814607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-05T19:36:11.046352Z","caller":"traceutil/trace.go:171","msg":"trace[1795452498] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:426; }","duration":"104.853548ms","start":"2023-12-05T19:36:10.94149Z","end":"2023-12-05T19:36:11.046343Z","steps":["trace[1795452498] 'agreement among raft nodes before linearized reading'  (duration: 104.79636ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:36:51.123908Z","caller":"traceutil/trace.go:171","msg":"trace[1388172312] transaction","detail":"{read_only:false; response_revision:983; number_of_response:1; }","duration":"132.784617ms","start":"2023-12-05T19:36:50.991105Z","end":"2023-12-05T19:36:51.12389Z","steps":["trace[1388172312] 'process raft request'  (duration: 132.692754ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:37:39.559312Z","caller":"traceutil/trace.go:171","msg":"trace[1815491735] transaction","detail":"{read_only:false; response_revision:1335; number_of_response:1; }","duration":"189.413062ms","start":"2023-12-05T19:37:39.369882Z","end":"2023-12-05T19:37:39.559295Z","steps":["trace[1815491735] 'process raft request'  (duration: 115.49124ms)","trace[1815491735] 'compare'  (duration: 73.806734ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [7830c8ecdbb0940c8ae8a1a8c94ad2811f0bba0b44bdd24d9b1c7844db1ac002] <==
	* 2023/12/05 19:37:21 GCP Auth Webhook started!
	2023/12/05 19:37:28 Ready to marshal response ...
	2023/12/05 19:37:28 Ready to write response ...
	2023/12/05 19:37:28 Ready to marshal response ...
	2023/12/05 19:37:28 Ready to write response ...
	2023/12/05 19:37:36 Ready to marshal response ...
	2023/12/05 19:37:36 Ready to write response ...
	2023/12/05 19:37:37 Ready to marshal response ...
	2023/12/05 19:37:37 Ready to write response ...
	2023/12/05 19:37:39 Ready to marshal response ...
	2023/12/05 19:37:39 Ready to write response ...
	2023/12/05 19:37:39 Ready to marshal response ...
	2023/12/05 19:37:39 Ready to write response ...
	2023/12/05 19:37:39 Ready to marshal response ...
	2023/12/05 19:37:39 Ready to write response ...
	2023/12/05 19:37:51 Ready to marshal response ...
	2023/12/05 19:37:51 Ready to write response ...
	2023/12/05 19:38:06 Ready to marshal response ...
	2023/12/05 19:38:06 Ready to write response ...
	2023/12/05 19:38:29 Ready to marshal response ...
	2023/12/05 19:38:29 Ready to write response ...
	2023/12/05 19:38:31 Ready to marshal response ...
	2023/12/05 19:38:31 Ready to write response ...
	2023/12/05 19:40:12 Ready to marshal response ...
	2023/12/05 19:40:12 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:40:22 up 22 min,  0 users,  load average: 0.17, 0.42, 0.23
	Linux addons-030936 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4] <==
	* I1205 19:38:20.303308       1 main.go:227] handling current node
	I1205 19:38:30.315025       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:38:30.315045       1 main.go:227] handling current node
	I1205 19:38:40.318809       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:38:40.318834       1 main.go:227] handling current node
	I1205 19:38:50.330326       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:38:50.330353       1 main.go:227] handling current node
	I1205 19:39:00.339537       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:39:00.339559       1 main.go:227] handling current node
	I1205 19:39:10.351290       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:39:10.351315       1 main.go:227] handling current node
	I1205 19:39:20.375316       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:39:20.375651       1 main.go:227] handling current node
	I1205 19:39:30.383266       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:39:30.383299       1 main.go:227] handling current node
	I1205 19:39:40.386822       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:39:40.386843       1 main.go:227] handling current node
	I1205 19:39:50.398823       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:39:50.398845       1 main.go:227] handling current node
	I1205 19:40:00.402550       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:40:00.402580       1 main.go:227] handling current node
	I1205 19:40:10.411434       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:40:10.411457       1 main.go:227] handling current node
	I1205 19:40:20.419419       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:40:20.419445       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646] <==
	* I1205 19:37:51.345154       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1205 19:37:51.567652       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.134.83"}
	E1205 19:37:52.069717       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1205 19:38:17.624524       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1205 19:38:31.680374       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.28:34928: read: connection reset by peer
	I1205 19:38:47.563176       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:47.563241       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:47.569789       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:47.569838       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:47.576516       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:47.576577       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:47.577427       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:47.577483       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:47.586690       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:47.586816       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:47.590877       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:47.590985       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:47.624968       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:47.625027       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:47.625050       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:47.625066       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1205 19:38:48.578467       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1205 19:38:48.625688       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1205 19:38:48.634765       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1205 19:40:12.375934       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.187.22"}
	
	* 
	* ==> kube-controller-manager [f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc] <==
	* W1205 19:39:08.442654       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:39:08.442682       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:39:30.330825       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:39:30.330855       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:39:30.453341       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:39:30.453375       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:39:30.779868       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:39:30.779901       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1205 19:40:12.219589       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1205 19:40:12.231727       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-b574q"
	I1205 19:40:12.237909       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="18.359199ms"
	I1205 19:40:12.243475       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.446865ms"
	I1205 19:40:12.243538       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="37.003µs"
	I1205 19:40:12.248674       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="62.041µs"
	I1205 19:40:14.044353       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1205 19:40:14.046147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="8.118µs"
	I1205 19:40:14.048606       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1205 19:40:14.470001       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.379502ms"
	I1205 19:40:14.470122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="66.201µs"
	W1205 19:40:17.776785       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:40:17.776820       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:40:17.886656       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:40:17.886688       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:40:19.170553       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:40:19.170587       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093] <==
	* I1205 19:36:10.835797       1 server_others.go:69] "Using iptables proxy"
	I1205 19:36:11.144455       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1205 19:36:12.044292       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 19:36:12.130017       1 server_others.go:152] "Using iptables Proxier"
	I1205 19:36:12.130064       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1205 19:36:12.130074       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1205 19:36:12.130109       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 19:36:12.130394       1 server.go:846] "Version info" version="v1.28.4"
	I1205 19:36:12.130407       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:36:12.132027       1 config.go:188] "Starting service config controller"
	I1205 19:36:12.132041       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 19:36:12.132069       1 config.go:97] "Starting endpoint slice config controller"
	I1205 19:36:12.132074       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 19:36:12.132565       1 config.go:315] "Starting node config controller"
	I1205 19:36:12.132573       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 19:36:12.441519       1 shared_informer.go:318] Caches are synced for service config
	I1205 19:36:12.441647       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 19:36:12.536287       1 shared_informer.go:318] Caches are synced for node config
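	Note: the route_localnet line above is also a plausible explanation for the "martian source 10.244.0.19 from 127.0.0.1" entries in the dmesg section of this dump: kube-proxy sets route_localnet=1 so node ports stay reachable via 127.0.0.1, and a connection to loopback that is DNATed into the pod network keeps its 127.0.0.1 source address, which the kernel logs as martian on any interface where route_localnet is not set (here the pod-side eth0). A hypothetical way to check the sysctl on this node, using the same CLI as the rest of this report (illustrative only, not something the test run executed):

	  # Hypothetical check: per the kube-proxy log above, this should print 1 on the node.
	  out/minikube-linux-amd64 -p addons-030936 ssh "sysctl net.ipv4.conf.all.route_localnet"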
	
	* 
	* ==> kube-scheduler [479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b] <==
	* W1205 19:35:51.430660       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 19:35:51.430736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1205 19:35:51.430607       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 19:35:51.430797       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1205 19:35:51.431444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1205 19:35:51.431464       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 19:35:51.431483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 19:35:51.431509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1205 19:35:51.431568       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 19:35:51.431697       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1205 19:35:51.431770       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 19:35:51.431844       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1205 19:35:51.431796       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 19:35:51.431874       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1205 19:35:51.431802       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:35:51.431893       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1205 19:35:51.431804       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:35:51.431909       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 19:35:52.332558       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:35:52.332591       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 19:35:52.374732       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:35:52.374764       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1205 19:35:52.401938       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 19:35:52.401970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1205 19:35:52.927227       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 05 19:40:14 addons-030936 kubelet[1565]: I1205 19:40:14.362886    1565 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f00321e8-40a6-4bdd-aed9-02fbe57b6826" path="/var/lib/kubelet/pods/f00321e8-40a6-4bdd-aed9-02fbe57b6826/volumes"
	Dec 05 19:40:14 addons-030936 kubelet[1565]: I1205 19:40:14.363196    1565 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f6dfd03f-7966-4c37-89c4-e5a4a1c2e395" path="/var/lib/kubelet/pods/f6dfd03f-7966-4c37-89c4-e5a4a1c2e395/volumes"
	Dec 05 19:40:14 addons-030936 kubelet[1565]: I1205 19:40:14.465009    1565 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-b574q" podStartSLOduration=1.290354796 podCreationTimestamp="2023-12-05 19:40:12 +0000 UTC" firstStartedPulling="2023-12-05 19:40:12.626184911 +0000 UTC m=+258.346763070" lastFinishedPulling="2023-12-05 19:40:13.800800439 +0000 UTC m=+259.521378585" observedRunningTime="2023-12-05 19:40:14.464532526 +0000 UTC m=+260.185110689" watchObservedRunningTime="2023-12-05 19:40:14.464970311 +0000 UTC m=+260.185548471"
	Dec 05 19:40:17 addons-030936 kubelet[1565]: I1205 19:40:17.363997    1565 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4eadd559-5bff-4a0f-811a-1bb9a6f0c907-webhook-cert\") pod \"4eadd559-5bff-4a0f-811a-1bb9a6f0c907\" (UID: \"4eadd559-5bff-4a0f-811a-1bb9a6f0c907\") "
	Dec 05 19:40:17 addons-030936 kubelet[1565]: I1205 19:40:17.364123    1565 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8pqs\" (UniqueName: \"kubernetes.io/projected/4eadd559-5bff-4a0f-811a-1bb9a6f0c907-kube-api-access-g8pqs\") pod \"4eadd559-5bff-4a0f-811a-1bb9a6f0c907\" (UID: \"4eadd559-5bff-4a0f-811a-1bb9a6f0c907\") "
	Dec 05 19:40:17 addons-030936 kubelet[1565]: I1205 19:40:17.365992    1565 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eadd559-5bff-4a0f-811a-1bb9a6f0c907-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4eadd559-5bff-4a0f-811a-1bb9a6f0c907" (UID: "4eadd559-5bff-4a0f-811a-1bb9a6f0c907"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 05 19:40:17 addons-030936 kubelet[1565]: I1205 19:40:17.366350    1565 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eadd559-5bff-4a0f-811a-1bb9a6f0c907-kube-api-access-g8pqs" (OuterVolumeSpecName: "kube-api-access-g8pqs") pod "4eadd559-5bff-4a0f-811a-1bb9a6f0c907" (UID: "4eadd559-5bff-4a0f-811a-1bb9a6f0c907"). InnerVolumeSpecName "kube-api-access-g8pqs". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 05 19:40:17 addons-030936 kubelet[1565]: I1205 19:40:17.462734    1565 scope.go:117] "RemoveContainer" containerID="796ce31d0daf6b3dae83fa7252eee6463ba4a058d6b2a59543647184df446773"
	Dec 05 19:40:17 addons-030936 kubelet[1565]: I1205 19:40:17.464910    1565 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4eadd559-5bff-4a0f-811a-1bb9a6f0c907-webhook-cert\") on node \"addons-030936\" DevicePath \"\""
	Dec 05 19:40:17 addons-030936 kubelet[1565]: I1205 19:40:17.464949    1565 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-g8pqs\" (UniqueName: \"kubernetes.io/projected/4eadd559-5bff-4a0f-811a-1bb9a6f0c907-kube-api-access-g8pqs\") on node \"addons-030936\" DevicePath \"\""
	Dec 05 19:40:17 addons-030936 kubelet[1565]: I1205 19:40:17.479052    1565 scope.go:117] "RemoveContainer" containerID="796ce31d0daf6b3dae83fa7252eee6463ba4a058d6b2a59543647184df446773"
	Dec 05 19:40:17 addons-030936 kubelet[1565]: E1205 19:40:17.479454    1565 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"796ce31d0daf6b3dae83fa7252eee6463ba4a058d6b2a59543647184df446773\": container with ID starting with 796ce31d0daf6b3dae83fa7252eee6463ba4a058d6b2a59543647184df446773 not found: ID does not exist" containerID="796ce31d0daf6b3dae83fa7252eee6463ba4a058d6b2a59543647184df446773"
	Dec 05 19:40:17 addons-030936 kubelet[1565]: I1205 19:40:17.479501    1565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"796ce31d0daf6b3dae83fa7252eee6463ba4a058d6b2a59543647184df446773"} err="failed to get container status \"796ce31d0daf6b3dae83fa7252eee6463ba4a058d6b2a59543647184df446773\": rpc error: code = NotFound desc = could not find container \"796ce31d0daf6b3dae83fa7252eee6463ba4a058d6b2a59543647184df446773\": container with ID starting with 796ce31d0daf6b3dae83fa7252eee6463ba4a058d6b2a59543647184df446773 not found: ID does not exist"
	Dec 05 19:40:17 addons-030936 kubelet[1565]: E1205 19:40:17.648632    1565 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err=<
	Dec 05 19:40:17 addons-030936 kubelet[1565]:         rpc error: code = Unknown desc = container create failed: time="2023-12-05T19:40:17Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:40:17 addons-030936 kubelet[1565]:         time="2023-12-05T19:40:17Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:40:17 addons-030936 kubelet[1565]:         time="2023-12-05T19:40:17Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:40:17 addons-030936 kubelet[1565]:         time="2023-12-05T19:40:17Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:40:17 addons-030936 kubelet[1565]:  > podSandboxID="aa743815cf0743f063238fbeb6f27e1bd08cb1508cbafa26cd91e337fa4450c5"
	Dec 05 19:40:17 addons-030936 kubelet[1565]: E1205 19:40:17.648793    1565 kuberuntime_manager.go:1261] container &Container{Name:gadget,Image:ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931,Command:[/entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_POD_UID,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.uid,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_IMAGE,Value:ghcr.io/inspektor-gadget/inspektor-gadget,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_VERSION,Value:v0.16.1,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_HOOK_MODE,Value:auto,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_FALLBACK_POD_INFORMER,Value:true,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CONTAINERD_SOCKETPATH,Value:/run/containerd/containerd.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CRIO_SOCKETPATH,Value:/run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_DOCKER_SOCKETPATH,Value:/run/docker.sock,ValueFrom:nil,},EnvVar{Name:HOST_ROOT,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:modules,ReadOnly:false,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:debugfs,ReadOnly:false,MountPath:/sys/kernel/debug,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cgroup,ReadOnly:false,MountPath:/sys/fs/cgroup,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:bpffs,ReadOnly:false,MountPath:/sys/fs/bpf,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4d5dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYSLOG SYS_PTRACE SYS_RESOURCE IPC_LOCK SYS_MODULE NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod gadget-q5tnq_gadget(56eb188a-c61d-4223-9714-57e2d393fe62): CreateContainerError: container create failed: time="2023-12-05T19:40:17Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:40:17 addons-030936 kubelet[1565]: time="2023-12-05T19:40:17Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:40:17 addons-030936 kubelet[1565]: time="2023-12-05T19:40:17Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:40:17 addons-030936 kubelet[1565]: time="2023-12-05T19:40:17Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:40:17 addons-030936 kubelet[1565]: E1205 19:40:17.648836    1565 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CreateContainerError: \"container create failed: time=\\\"2023-12-05T19:40:17Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:40:17Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:40:17Z\\\" level=warning msg=\\\"lstat : no such file or directory\\\"\\ntime=\\\"2023-12-05T19:40:17Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: exec: \\\\\\\"/entrypoint.sh\\\\\\\": stat /entrypoint.sh: no such file or directory\\\"\\n\"" pod="gadget/gadget-q5tnq" podUID="56eb188a-c61d-4223-9714-57e2d393fe62"
	Dec 05 19:40:18 addons-030936 kubelet[1565]: I1205 19:40:18.362659    1565 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4eadd559-5bff-4a0f-811a-1bb9a6f0c907" path="/var/lib/kubelet/pods/4eadd559-5bff-4a0f-811a-1bb9a6f0c907/volumes"
	
	* 
	* ==> storage-provisioner [97c31a99b960644c16a9d6c36d39c01727dcdbdb6383c9541b6393c7220480dc] <==
	* I1205 19:36:41.560515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 19:36:41.571533       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 19:36:41.571569       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 19:36:41.578813       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 19:36:41.578945       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-030936_197abb5a-49ba-4131-b25a-2420040b942d!
	I1205 19:36:41.579972       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2b3c664c-556d-482f-8994-26b925302f65", APIVersion:"v1", ResourceVersion:"899", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-030936_197abb5a-49ba-4131-b25a-2420040b942d became leader
	I1205 19:36:41.680114       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-030936_197abb5a-49ba-4131-b25a-2420040b942d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-030936 -n addons-030936
helpers_test.go:261: (dbg) Run:  kubectl --context addons-030936 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: gadget-q5tnq
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-030936 describe pod gadget-q5tnq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-030936 describe pod gadget-q5tnq: exit status 1 (63.454748ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gadget-q5tnq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-030936 describe pod gadget-q5tnq: exit status 1
--- FAIL: TestAddons/parallel/Ingress (152.25s)
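Note on the non-running pod flagged above: the kubelet log in this post-mortem shows why gadget-q5tnq never starts. The DaemonSet pulls the pinned image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1 yet still injects INSPEKTOR_GADGET_VERSION=v0.16.1 and runs Command [/entrypoint.sh], and every create attempt fails with exec: "/entrypoint.sh": stat /entrypoint.sh: no such file or directory, which suggests the addon manifest lags the image tag and the v0.23.1 image no longer ships that entrypoint. A hypothetical way to verify the mismatch outside the test run (assumes a local docker and that the image contains an ls binary; illustrative only, not part of this report):

  # Hypothetical check: override the entrypoint and list the image root;
  # per the kubelet errors above, /entrypoint.sh is expected to be absent.
  docker run --rm --entrypoint ls \
    ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931 \
    -l /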

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (482.54s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-q5tnq" [56eb188a-c61d-4223-9714-57e2d393fe62] Pending / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: ***** TestAddons/parallel/InspektorGadget: pod "k8s-app=gadget" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:837: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-030936 -n addons-030936
addons_test.go:837: TestAddons/parallel/InspektorGadget: showing logs for failed pods as of 2023-12-05 19:45:43.689804182 +0000 UTC m=+658.830345378
addons_test.go:837: (dbg) Run:  kubectl --context addons-030936 describe po gadget-q5tnq -n gadget
addons_test.go:837: (dbg) kubectl --context addons-030936 describe po gadget-q5tnq -n gadget:
Name:             gadget-q5tnq
Namespace:        gadget
Priority:         0
Service Account:  gadget
Node:             addons-030936/192.168.49.2
Start Time:       Tue, 05 Dec 2023 19:36:13 +0000
Labels:           controller-revision-hash=5d55b57d4c
                  k8s-app=gadget
                  pod-template-generation=1
Annotations:      container.apparmor.security.beta.kubernetes.io/gadget: unconfined
                  inspektor-gadget.kinvolk.io/option-hook-mode: auto
Status:           Pending
IP:               192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  DaemonSet/gadget
Containers:
  gadget:
    Container ID:  
    Image:         ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /entrypoint.sh
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Liveness:       exec [/bin/gadgettracermanager -liveness] delay=0s timeout=2s period=5s #success=1 #failure=3
    Readiness:      exec [/bin/gadgettracermanager -liveness] delay=0s timeout=2s period=5s #success=1 #failure=3
    Environment:
      NODE_NAME:                                       (v1:spec.nodeName)
      GADGET_POD_UID:                                  (v1:metadata.uid)
      TRACELOOP_NODE_NAME:                             (v1:spec.nodeName)
      TRACELOOP_POD_NAME:                              gadget-q5tnq (v1:metadata.name)
      TRACELOOP_POD_NAMESPACE:                         gadget (v1:metadata.namespace)
      GADGET_IMAGE:                                    ghcr.io/inspektor-gadget/inspektor-gadget
      INSPEKTOR_GADGET_VERSION:                        v0.16.1
      INSPEKTOR_GADGET_OPTION_HOOK_MODE:               auto
      INSPEKTOR_GADGET_OPTION_FALLBACK_POD_INFORMER:   true
      INSPEKTOR_GADGET_CONTAINERD_SOCKETPATH:          /run/containerd/containerd.sock
      INSPEKTOR_GADGET_CRIO_SOCKETPATH:                /run/crio/crio.sock
      INSPEKTOR_GADGET_DOCKER_SOCKETPATH:              /run/docker.sock
      HOST_ROOT:                                       /host
    Mounts:
      /host from host (rw)
      /lib/modules from modules (rw)
      /run from run (rw)
      /sys/fs/bpf from bpffs (rw)
      /sys/fs/cgroup from cgroup (rw)
      /sys/kernel/debug from debugfs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4d5dz (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  host:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:  
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run
    HostPathType:  
  cgroup:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/cgroup
    HostPathType:  
  modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  bpffs:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/bpf
    HostPathType:  
  debugfs:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/kernel/debug
    HostPathType:  
  kube-api-access-4d5dz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age    From               Message
  ----     ------     ----   ----               -------
  Normal   Scheduled  9m30s  default-scheduler  Successfully assigned gadget/gadget-q5tnq to addons-030936
  Normal   Pulled     9m24s  kubelet            Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 5.888s (5.888s including waiting)
  Warning  Failed     9m24s  kubelet            Error: container create failed: time="2023-12-05T19:36:19Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:36:19Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:36:19Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:36:19Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
  Normal   Pulled     9m23s  kubelet            Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 153ms (153ms including waiting)
  Warning  Failed     9m23s  kubelet            Error: container create failed: time="2023-12-05T19:36:20Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:36:20Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:36:20Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:36:20Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
  Warning  Failed     9m11s  kubelet            Error: container create failed: time="2023-12-05T19:36:32Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:36:32Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:36:32Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:36:32Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
  Normal   Pulled     9m11s  kubelet            Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 281ms (281ms including waiting)
  Normal   Pulled     8m30s  kubelet            Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 149ms (26.715s including waiting)
  Warning  Failed     8m30s  kubelet            Error: container create failed: time="2023-12-05T19:37:13Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:13Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:13Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:37:13Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
  Normal   Pulled     8m16s  kubelet            Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 182ms (182ms including waiting)
  Warning  Failed     8m16s  kubelet            Error: container create failed: time="2023-12-05T19:37:27Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:27Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:27Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:37:27Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
  Normal   Pulled     8m3s   kubelet            Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 166ms (1.261s including waiting)
Warning  Failed  8m3s  kubelet  Error: container create failed: time="2023-12-05T19:37:40Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:40Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:40Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:37:40Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal   Pulled  7m49s  kubelet  Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 280ms (280ms including waiting)
Warning  Failed  7m49s  kubelet  Error: container create failed: time="2023-12-05T19:37:54Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:54Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:54Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:37:54Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal   Pulled  7m32s  kubelet  Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 212ms (3.766s including waiting)
Warning  Failed  7m32s  kubelet  Error: container create failed: time="2023-12-05T19:38:11Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:38:11Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:38:11Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:38:11Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal  Pulling  4m19s (x22 over 9m30s)  kubelet  Pulling image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931"
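The events above repeat one pattern: every pull of the v0.23.1 image succeeds within seconds, but each container create fails with the same runc error, exec: "/entrypoint.sh": stat /entrypoint.sh: no such file or directory, so the pod never leaves CreateContainerError. A quick way to read that waiting reason straight from the API (a sketch reusing the profile, namespace, and pod name shown above):

	kubectl --context addons-030936 -n gadget get pod gadget-q5tnq \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'
	# prints: CreateContainerError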
addons_test.go:837: (dbg) Run:  kubectl --context addons-030936 logs gadget-q5tnq -n gadget
addons_test.go:837: (dbg) Non-zero exit: kubectl --context addons-030936 logs gadget-q5tnq -n gadget: exit status 1 (70.908244ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "gadget" in pod "gadget-q5tnq" is waiting to start: CreateContainerError

                                                
                                                
** /stderr **
addons_test.go:837: kubectl --context addons-030936 logs gadget-q5tnq -n gadget: exit status 1
addons_test.go:838: failed waiting for inspektor-gadget pod: k8s-app=gadget within 8m0s: context deadline exceeded
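The 8m0s that expires here is a poll for a ready pod matching the k8s-app=gadget selector; an equivalent manual check (a sketch mirroring the test's selector and timeout, not the test's own code) is:

	kubectl --context addons-030936 -n gadget wait pod \
	  --selector=k8s-app=gadget --for=condition=Ready --timeout=8m0s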
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-030936
helpers_test.go:235: (dbg) docker inspect addons-030936:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d8edc52e4b3cb1eb27e7f0018b587530288794253ec202481f3659057a786e0b",
	        "Created": "2023-12-05T19:35:38.531584162Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14625,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-05T19:35:38.85870844Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:87b04fa850a730e5ca832acdf82e6994855a857f2c65a1e9dfd36c86f13b161b",
	        "ResolvConfPath": "/var/lib/docker/containers/d8edc52e4b3cb1eb27e7f0018b587530288794253ec202481f3659057a786e0b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d8edc52e4b3cb1eb27e7f0018b587530288794253ec202481f3659057a786e0b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d8edc52e4b3cb1eb27e7f0018b587530288794253ec202481f3659057a786e0b/hosts",
	        "LogPath": "/var/lib/docker/containers/d8edc52e4b3cb1eb27e7f0018b587530288794253ec202481f3659057a786e0b/d8edc52e4b3cb1eb27e7f0018b587530288794253ec202481f3659057a786e0b-json.log",
	        "Name": "/addons-030936",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-030936:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-030936",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fc821b8f9545652967106a5c6f8259265d887bbbe0eb8fe1a2db4ed4b778b4cf-init/diff:/var/lib/docker/overlay2/8cb0dc756d42dafb4250d739248baa62eaad1aada62df117f76ff2e087cad9b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc821b8f9545652967106a5c6f8259265d887bbbe0eb8fe1a2db4ed4b778b4cf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc821b8f9545652967106a5c6f8259265d887bbbe0eb8fe1a2db4ed4b778b4cf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc821b8f9545652967106a5c6f8259265d887bbbe0eb8fe1a2db4ed4b778b4cf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-030936",
	                "Source": "/var/lib/docker/volumes/addons-030936/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-030936",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-030936",
	                "name.minikube.sigs.k8s.io": "addons-030936",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fcf3dd2c351f47a3d797a0c5c53111895392c7483ad65d6cb7e5a691dde8a064",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fcf3dd2c351f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-030936": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d8edc52e4b3c",
	                        "addons-030936"
	                    ],
	                    "NetworkID": "93543a5ccc9738ba72bc1f7a0af74a705b7fbe3a0583a577dc8b0d1ca5a409a8",
	                    "EndpointID": "484985b822a232480e61c2bc20afa4c3d3d8a6040b0eb69aad112b6ff36d5767",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
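In the inspect output above, NetworkSettings.Ports records how the node container's services are published on host loopback ports (22/tcp on 32772, 2376/tcp on 32771, 8443/tcp on 32769, and so on). To extract just that map without the full dump (a sketch, assuming the same container name):

	docker inspect -f '{{json .NetworkSettings.Ports}}' addons-030936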
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-030936 -n addons-030936
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-030936 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-030936 logs -n 25: (1.209249863s)
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| delete  | -p download-only-428164                                                                     | download-only-428164   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| delete  | -p download-only-428164                                                                     | download-only-428164   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| start   | --download-only -p                                                                          | download-docker-383682 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | download-docker-383682                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-383682                                                                   | download-docker-383682 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-319231   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | binary-mirror-319231                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:32971                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-319231                                                                     | binary-mirror-319231   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| addons  | enable dashboard -p                                                                         | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | addons-030936                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | addons-030936                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-030936 --wait=true                                                                | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-030936 addons                                                                        | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-030936 ssh cat                                                                       | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | /opt/local-path-provisioner/pvc-c0670ccc-a245-46b9-8552-084bf6aa50cf_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-030936 addons disable                                                                | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:38 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | addons-030936                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | -p addons-030936                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-030936 ip                                                                            | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	| addons  | addons-030936 addons disable                                                                | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-030936 ssh curl -s                                                                   | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | -p addons-030936                                                                            |                        |         |         |                     |                     |
	| addons  | addons-030936 addons disable                                                                | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-030936 addons                                                                        | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-030936 addons                                                                        | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-030936 ip                                                                            | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:40 UTC | 05 Dec 23 19:40 UTC |
	| addons  | addons-030936 addons disable                                                                | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:40 UTC | 05 Dec 23 19:40 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-030936 addons disable                                                                | addons-030936          | jenkins | v1.32.0 | 05 Dec 23 19:40 UTC | 05 Dec 23 19:40 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:35:15
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:35:15.725024   13952 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:35:15.725154   13952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:15.725162   13952 out.go:309] Setting ErrFile to fd 2...
	I1205 19:35:15.725166   13952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:15.725350   13952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
	I1205 19:35:15.725960   13952 out.go:303] Setting JSON to false
	I1205 19:35:15.726755   13952 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1068,"bootTime":1701803848,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:35:15.726815   13952 start.go:138] virtualization: kvm guest
	I1205 19:35:15.729414   13952 out.go:177] * [addons-030936] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:35:15.731051   13952 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 19:35:15.732594   13952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:35:15.731058   13952 notify.go:220] Checking for updates...
	I1205 19:35:15.734341   13952 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 19:35:15.735934   13952 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	I1205 19:35:15.737432   13952 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:35:15.739036   13952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:35:15.740630   13952 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:35:15.760271   13952 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:35:15.760388   13952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:35:15.812013   13952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:41 SystemTime:2023-12-05 19:35:15.803129407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:35:15.812112   13952 docker.go:295] overlay module found
	I1205 19:35:15.814181   13952 out.go:177] * Using the docker driver based on user configuration
	I1205 19:35:15.815818   13952 start.go:298] selected driver: docker
	I1205 19:35:15.815830   13952 start.go:902] validating driver "docker" against <nil>
	I1205 19:35:15.815840   13952 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:35:15.816633   13952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:35:15.865741   13952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:41 SystemTime:2023-12-05 19:35:15.8579075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:35:15.865889   13952 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 19:35:15.866107   13952 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:35:15.868310   13952 out.go:177] * Using Docker driver with root privileges
	I1205 19:35:15.870127   13952 cni.go:84] Creating CNI manager for ""
	I1205 19:35:15.870149   13952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:35:15.870159   13952 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:35:15.870169   13952 start_flags.go:323] config:
	{Name:addons-030936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-030936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:15.871951   13952 out.go:177] * Starting control plane node addons-030936 in cluster addons-030936
	I1205 19:35:15.873285   13952 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:35:15.874705   13952 out.go:177] * Pulling base image ...
	I1205 19:35:15.875952   13952 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:35:15.875990   13952 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 19:35:15.876003   13952 cache.go:56] Caching tarball of preloaded images
	I1205 19:35:15.876057   13952 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 19:35:15.876118   13952 preload.go:174] Found /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:35:15.876132   13952 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 19:35:15.876523   13952 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/config.json ...
	I1205 19:35:15.876551   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/config.json: {Name:mk6feeae17388382e4bfff44f115f9965b601900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:15.891146   13952 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1205 19:35:15.891258   13952 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory
	I1205 19:35:15.891274   13952 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory, skipping pull
	I1205 19:35:15.891278   13952 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in cache, skipping pull
	I1205 19:35:15.891293   13952 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f as a tarball
	I1205 19:35:15.891300   13952 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f from local cache
	I1205 19:35:27.235843   13952 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f from cached tarball
	I1205 19:35:27.235886   13952 cache.go:194] Successfully downloaded all kic artifacts
	I1205 19:35:27.235915   13952 start.go:365] acquiring machines lock for addons-030936: {Name:mk83ff218c25043d0e306eee7870b5366e64c5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:35:27.236015   13952 start.go:369] acquired machines lock for "addons-030936" in 81.288µs
	I1205 19:35:27.236039   13952 start.go:93] Provisioning new machine with config: &{Name:addons-030936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-030936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:35:27.236116   13952 start.go:125] createHost starting for "" (driver="docker")
	I1205 19:35:27.238279   13952 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1205 19:35:27.238501   13952 start.go:159] libmachine.API.Create for "addons-030936" (driver="docker")
	I1205 19:35:27.238527   13952 client.go:168] LocalClient.Create starting
	I1205 19:35:27.238613   13952 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem
	I1205 19:35:27.336238   13952 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem
	I1205 19:35:27.519096   13952 cli_runner.go:164] Run: docker network inspect addons-030936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 19:35:27.534392   13952 cli_runner.go:211] docker network inspect addons-030936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 19:35:27.534467   13952 network_create.go:281] running [docker network inspect addons-030936] to gather additional debugging logs...
	I1205 19:35:27.534490   13952 cli_runner.go:164] Run: docker network inspect addons-030936
	W1205 19:35:27.548744   13952 cli_runner.go:211] docker network inspect addons-030936 returned with exit code 1
	I1205 19:35:27.548769   13952 network_create.go:284] error running [docker network inspect addons-030936]: docker network inspect addons-030936: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-030936 not found
	I1205 19:35:27.548780   13952 network_create.go:286] output of [docker network inspect addons-030936]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-030936 not found
	
	** /stderr **
	I1205 19:35:27.548874   13952 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:35:27.564142   13952 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002cdb4d0}
	I1205 19:35:27.564181   13952 network_create.go:124] attempt to create docker network addons-030936 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 19:35:27.564248   13952 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-030936 addons-030936
	I1205 19:35:27.876420   13952 network_create.go:108] docker network addons-030936 192.168.49.0/24 created
	I1205 19:35:27.876449   13952 kic.go:121] calculated static IP "192.168.49.2" for the "addons-030936" container
	I1205 19:35:27.876497   13952 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 19:35:27.890973   13952 cli_runner.go:164] Run: docker volume create addons-030936 --label name.minikube.sigs.k8s.io=addons-030936 --label created_by.minikube.sigs.k8s.io=true
	I1205 19:35:27.993659   13952 oci.go:103] Successfully created a docker volume addons-030936
	I1205 19:35:27.993748   13952 cli_runner.go:164] Run: docker run --rm --name addons-030936-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-030936 --entrypoint /usr/bin/test -v addons-030936:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib
	I1205 19:35:33.284270   13952 cli_runner.go:217] Completed: docker run --rm --name addons-030936-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-030936 --entrypoint /usr/bin/test -v addons-030936:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib: (5.29047996s)
	I1205 19:35:33.284296   13952 oci.go:107] Successfully prepared a docker volume addons-030936
	I1205 19:35:33.284314   13952 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:35:33.284331   13952 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 19:35:33.284374   13952 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-030936:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 19:35:38.464120   13952 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-030936:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir: (5.179701172s)
	I1205 19:35:38.464150   13952 kic.go:203] duration metric: took 5.179814 seconds to extract preloaded images to volume
	W1205 19:35:38.464296   13952 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 19:35:38.464381   13952 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 19:35:38.516514   13952 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-030936 --name addons-030936 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-030936 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-030936 --network addons-030936 --ip 192.168.49.2 --volume addons-030936:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1205 19:35:38.867220   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Running}}
	I1205 19:35:38.885381   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:35:38.903961   13952 cli_runner.go:164] Run: docker exec addons-030936 stat /var/lib/dpkg/alternatives/iptables
	I1205 19:35:38.944508   13952 oci.go:144] the created container "addons-030936" has a running status.
	I1205 19:35:38.944560   13952 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa...
	I1205 19:35:39.060109   13952 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 19:35:39.080463   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:35:39.097110   13952 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 19:35:39.097134   13952 kic_runner.go:114] Args: [docker exec --privileged addons-030936 chown docker:docker /home/docker/.ssh/authorized_keys]
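These steps are the whole ssh bootstrap: generate a keypair on the host, place the public half in the container's authorized_keys, and fix ownership via a privileged exec. A hand-rolled equivalent (a sketch only; minikube streams the file through docker exec rather than using docker cp, and the host port comes from the inspect call below):

	ssh-keygen -t rsa -N '' -f ./id_rsa                  # host-side keypair
	docker cp ./id_rsa.pub addons-030936:/home/docker/.ssh/authorized_keys
	docker exec --privileged addons-030936 \
	  chown docker:docker /home/docker/.ssh/authorized_keys
	ssh -i ./id_rsa -p 32772 docker@127.0.0.1 true       # smoke-test the login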
	I1205 19:35:39.160595   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:35:39.176546   13952 machine.go:88] provisioning docker machine ...
	I1205 19:35:39.176594   13952 ubuntu.go:169] provisioning hostname "addons-030936"
	I1205 19:35:39.176650   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:35:39.195057   13952 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:39.195595   13952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1205 19:35:39.195619   13952 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-030936 && echo "addons-030936" | sudo tee /etc/hostname
	I1205 19:35:39.197298   13952 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59078->127.0.0.1:32772: read: connection reset by peer
	I1205 19:35:42.338199   13952 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-030936
	
	I1205 19:35:42.338270   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:35:42.354216   13952 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:42.354533   13952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1205 19:35:42.354551   13952 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-030936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-030936/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-030936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:35:42.484433   13952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:35:42.484482   13952 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6088/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6088/.minikube}
	I1205 19:35:42.484525   13952 ubuntu.go:177] setting up certificates
	I1205 19:35:42.484537   13952 provision.go:83] configureAuth start
	I1205 19:35:42.484611   13952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-030936
	I1205 19:35:42.500909   13952 provision.go:138] copyHostCerts
	I1205 19:35:42.500979   13952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem (1078 bytes)
	I1205 19:35:42.501099   13952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem (1123 bytes)
	I1205 19:35:42.501180   13952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem (1679 bytes)
	I1205 19:35:42.501259   13952 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem org=jenkins.addons-030936 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-030936]
	I1205 19:35:42.630583   13952 provision.go:172] copyRemoteCerts
	I1205 19:35:42.630642   13952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:35:42.630672   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:35:42.647107   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:35:42.740371   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:35:42.761194   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:35:42.782525   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1205 19:35:42.803329   13952 provision.go:86] duration metric: configureAuth took 318.774147ms
	I1205 19:35:42.803359   13952 ubuntu.go:193] setting minikube options for container-runtime
	I1205 19:35:42.803541   13952 config.go:182] Loaded profile config "addons-030936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:35:42.803646   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:35:42.819895   13952 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:42.820347   13952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1205 19:35:42.820371   13952 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:35:43.037157   13952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:35:43.037179   13952 machine.go:91] provisioned docker machine in 3.860610629s
	I1205 19:35:43.037189   13952 client.go:171] LocalClient.Create took 15.7986566s
	I1205 19:35:43.037212   13952 start.go:167] duration metric: libmachine.API.Create for "addons-030936" took 15.798710641s
	I1205 19:35:43.037221   13952 start.go:300] post-start starting for "addons-030936" (driver="docker")
	I1205 19:35:43.037245   13952 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:35:43.037303   13952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:35:43.037351   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:35:43.055135   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:35:43.148458   13952 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:35:43.151324   13952 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 19:35:43.151353   13952 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 19:35:43.151363   13952 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 19:35:43.151371   13952 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1205 19:35:43.151391   13952 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6088/.minikube/addons for local assets ...
	I1205 19:35:43.151452   13952 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6088/.minikube/files for local assets ...
	I1205 19:35:43.151479   13952 start.go:303] post-start completed in 114.252015ms
	I1205 19:35:43.151765   13952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-030936
	I1205 19:35:43.168111   13952 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/config.json ...
	I1205 19:35:43.168417   13952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:35:43.168458   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:35:43.185306   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:35:43.276778   13952 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 19:35:43.280704   13952 start.go:128] duration metric: createHost completed in 16.044575801s
	I1205 19:35:43.280726   13952 start.go:83] releasing machines lock for "addons-030936", held for 16.044699676s
	I1205 19:35:43.280780   13952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-030936
	I1205 19:35:43.297038   13952 ssh_runner.go:195] Run: cat /version.json
	I1205 19:35:43.297084   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:35:43.297110   13952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:35:43.297221   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:35:43.315551   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:35:43.315791   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:35:43.491466   13952 ssh_runner.go:195] Run: systemctl --version
	I1205 19:35:43.495353   13952 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:35:43.629288   13952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 19:35:43.633336   13952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:35:43.650614   13952 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 19:35:43.650694   13952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:35:43.677728   13952 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1205 19:35:43.677759   13952 start.go:475] detecting cgroup driver to use...
	I1205 19:35:43.677796   13952 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1205 19:35:43.677846   13952 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:35:43.691277   13952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:35:43.701250   13952 docker.go:203] disabling cri-docker service (if available) ...
	I1205 19:35:43.701307   13952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:35:43.716033   13952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:35:43.728627   13952 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:35:43.807803   13952 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:35:43.887861   13952 docker.go:219] disabling docker service ...
	I1205 19:35:43.887927   13952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:35:43.904376   13952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:35:43.914244   13952 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:35:43.987597   13952 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:35:44.068092   13952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:35:44.077785   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:35:44.092210   13952 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 19:35:44.092267   13952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:35:44.100653   13952 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:35:44.100725   13952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:35:44.109056   13952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:35:44.117643   13952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:35:44.125950   13952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:35:44.133747   13952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:35:44.140860   13952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:35:44.147907   13952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:35:44.222584   13952 ssh_runner.go:195] Run: sudo systemctl restart crio
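The four sed edits above rewrite the CRI-O drop-in in place before this restart; a quick way to confirm the drop-in carries the intended values (keys taken from the commands above):

	grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"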
	I1205 19:35:44.327204   13952 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:35:44.327268   13952 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:35:44.330450   13952 start.go:543] Will wait 60s for crictl version
	I1205 19:35:44.330500   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:35:44.333477   13952 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:35:44.365760   13952 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1205 19:35:44.365864   13952 ssh_runner.go:195] Run: crio --version
	I1205 19:35:44.400670   13952 ssh_runner.go:195] Run: crio --version
	I1205 19:35:44.434144   13952 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1205 19:35:44.435563   13952 cli_runner.go:164] Run: docker network inspect addons-030936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:35:44.451318   13952 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 19:35:44.454583   13952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:35:44.463987   13952 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:35:44.464034   13952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:35:44.516312   13952 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 19:35:44.516333   13952 crio.go:415] Images already preloaded, skipping extraction
	I1205 19:35:44.516379   13952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:35:44.547427   13952 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 19:35:44.547449   13952 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:35:44.547505   13952 ssh_runner.go:195] Run: crio config
	I1205 19:35:44.585318   13952 cni.go:84] Creating CNI manager for ""
	I1205 19:35:44.585337   13952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:35:44.585358   13952 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 19:35:44.585383   13952 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-030936 NodeName:addons-030936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:35:44.585536   13952 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-030936"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
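This rendered config is the file copied to /var/tmp/minikube/kubeadm.yaml below and handed to kubeadm init. For reference, a config like this can be sanity-checked without mutating the node (a sketch using stock kubeadm subcommands):

	# Validate API versions / defaulting without touching the cluster:
	kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml --new-config /dev/stdout
	# Full offline rehearsal (runs preflight, renders manifests to a temp dir):
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run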
	
	I1205 19:35:44.585613   13952 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-030936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-030936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 19:35:44.585668   13952 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 19:35:44.593502   13952 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:35:44.593552   13952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 19:35:44.600602   13952 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1205 19:35:44.614890   13952 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:35:44.629508   13952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1205 19:35:44.645064   13952 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 19:35:44.647895   13952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:35:44.656744   13952 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936 for IP: 192.168.49.2
	I1205 19:35:44.656768   13952 certs.go:190] acquiring lock for shared ca certs: {Name:mk6fbd7b27250f9a01d87d327232e4afd0539a2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:44.656863   13952 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key
	I1205 19:35:44.943594   13952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt ...
	I1205 19:35:44.943628   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt: {Name:mkd05ad24bcb37acd20b4a8a593813ca81d33c4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:44.943828   13952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key ...
	I1205 19:35:44.943843   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key: {Name:mk20b22277ba592e40f1366a895a8d85d6727858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:44.943935   13952 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key
	I1205 19:35:45.027333   13952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.crt ...
	I1205 19:35:45.027363   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.crt: {Name:mkcb4a75cc08c5d51336d952f946273cb8bfb8d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:45.027557   13952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key ...
	I1205 19:35:45.027571   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key: {Name:mk7ab9bf29928ec0820d5b387e58e4d640f50ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:45.027698   13952 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.key
	I1205 19:35:45.027713   13952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt with IP's: []
	I1205 19:35:45.136599   13952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt ...
	I1205 19:35:45.136627   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: {Name:mk549b45e94a3213800e3bf739fc30aaf41137ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:45.136808   13952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.key ...
	I1205 19:35:45.136822   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.key: {Name:mk6ce73ee3cee48aaea77933cce9dbc2070f1feb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:45.136910   13952 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.key.dd3b5fb2
	I1205 19:35:45.136929   13952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1205 19:35:45.416684   13952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.crt.dd3b5fb2 ...
	I1205 19:35:45.416716   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.crt.dd3b5fb2: {Name:mka5423a609e5d353fcf2781bc07f34009b7ddf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:45.416906   13952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.key.dd3b5fb2 ...
	I1205 19:35:45.416923   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.key.dd3b5fb2: {Name:mk0d07d5437f4e7279b33579f3008cf206aa6385 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:45.417020   13952 certs.go:337] copying /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.crt
	I1205 19:35:45.417092   13952 certs.go:341] copying /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.key
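The apiserver cert minted here is signed for the SANs listed a few lines up ([192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]); to confirm what actually landed in the certificate, dump the extension (path from the log):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
	# expect the four IPs above, plus any DNS names minikube's template adds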
	I1205 19:35:45.417135   13952 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/proxy-client.key
	I1205 19:35:45.417150   13952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/proxy-client.crt with IP's: []
	I1205 19:35:45.546894   13952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/proxy-client.crt ...
	I1205 19:35:45.546923   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/proxy-client.crt: {Name:mk9124eef00a7036c57f9f2e6af0f9d7a6374656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:45.547108   13952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/proxy-client.key ...
	I1205 19:35:45.547129   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/proxy-client.key: {Name:mkeaf51739a91a52ff0f836d5af9486da7395742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:45.547328   13952 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 19:35:45.547362   13952 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:35:45.547388   13952 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:35:45.547420   13952 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem (1679 bytes)
	I1205 19:35:45.548005   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 19:35:45.568705   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:35:45.588772   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:35:45.608805   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:35:45.628990   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:35:45.649132   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:35:45.669319   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:35:45.689205   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 19:35:45.710156   13952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:35:45.730948   13952 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:35:45.746346   13952 ssh_runner.go:195] Run: openssl version
	I1205 19:35:45.750992   13952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:35:45.758948   13952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:35:45.761874   13952 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:35:45.761910   13952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:35:45.767835   13952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
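The b5213941.0 name is not arbitrary: OpenSSL resolves trust anchors by subject-name hash, and the `openssl x509 -hash` run two lines earlier computes exactly that value. Reproducing the symlink by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# -> b5213941                (the symlink name is "<hash>.0")
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0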
	I1205 19:35:45.776236   13952 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 19:35:45.778997   13952 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 19:35:45.779039   13952 kubeadm.go:404] StartCluster: {Name:addons-030936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-030936 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:45.779098   13952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:35:45.779137   13952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:35:45.809722   13952 cri.go:89] found id: ""
	I1205 19:35:45.809778   13952 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:35:45.817351   13952 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:35:45.825483   13952 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1205 19:35:45.825523   13952 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:35:45.832919   13952 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:35:45.832956   13952 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 19:35:45.907422   13952 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1205 19:35:45.966264   13952 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 19:35:54.544970   13952 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1205 19:35:54.545039   13952 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 19:35:54.545156   13952 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1205 19:35:54.545249   13952 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1205 19:35:54.545297   13952 kubeadm.go:322] OS: Linux
	I1205 19:35:54.545363   13952 kubeadm.go:322] CGROUPS_CPU: enabled
	I1205 19:35:54.545479   13952 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1205 19:35:54.545549   13952 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1205 19:35:54.545630   13952 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1205 19:35:54.545717   13952 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1205 19:35:54.545779   13952 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1205 19:35:54.545846   13952 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1205 19:35:54.545894   13952 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1205 19:35:54.545960   13952 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1205 19:35:54.546039   13952 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:35:54.546169   13952 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:35:54.546301   13952 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 19:35:54.546392   13952 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:35:54.548066   13952 out.go:204]   - Generating certificates and keys ...
	I1205 19:35:54.548148   13952 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 19:35:54.548259   13952 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 19:35:54.548358   13952 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:35:54.548432   13952 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:35:54.548508   13952 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:35:54.548570   13952 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1205 19:35:54.548660   13952 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1205 19:35:54.548825   13952 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-030936 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 19:35:54.548901   13952 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1205 19:35:54.549046   13952 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-030936 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 19:35:54.549138   13952 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:35:54.549228   13952 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:35:54.549288   13952 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1205 19:35:54.549365   13952 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:35:54.549422   13952 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:35:54.549469   13952 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:35:54.549530   13952 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:35:54.549575   13952 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:35:54.549648   13952 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:35:54.549701   13952 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:35:54.551362   13952 out.go:204]   - Booting up control plane ...
	I1205 19:35:54.551432   13952 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:35:54.551527   13952 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:35:54.551600   13952 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:35:54.551713   13952 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:35:54.551788   13952 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:35:54.551821   13952 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 19:35:54.551954   13952 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 19:35:54.552020   13952 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002284 seconds
	I1205 19:35:54.552140   13952 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:35:54.552278   13952 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:35:54.552334   13952 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:35:54.552491   13952 kubeadm.go:322] [mark-control-plane] Marking the node addons-030936 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 19:35:54.552542   13952 kubeadm.go:322] [bootstrap-token] Using token: wzh2ds.ktaesz4l7xwfj2en
	I1205 19:35:54.553848   13952 out.go:204]   - Configuring RBAC rules ...
	I1205 19:35:54.553960   13952 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:35:54.554057   13952 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:35:54.554223   13952 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:35:54.554379   13952 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:35:54.554542   13952 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:35:54.554680   13952 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:35:54.554839   13952 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:35:54.554879   13952 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 19:35:54.554923   13952 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 19:35:54.554930   13952 kubeadm.go:322] 
	I1205 19:35:54.554993   13952 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 19:35:54.555003   13952 kubeadm.go:322] 
	I1205 19:35:54.555076   13952 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 19:35:54.555085   13952 kubeadm.go:322] 
	I1205 19:35:54.555116   13952 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 19:35:54.555175   13952 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:35:54.555218   13952 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:35:54.555224   13952 kubeadm.go:322] 
	I1205 19:35:54.555272   13952 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 19:35:54.555281   13952 kubeadm.go:322] 
	I1205 19:35:54.555328   13952 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 19:35:54.555334   13952 kubeadm.go:322] 
	I1205 19:35:54.555375   13952 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 19:35:54.555438   13952 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:35:54.555502   13952 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:35:54.555508   13952 kubeadm.go:322] 
	I1205 19:35:54.555597   13952 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:35:54.555695   13952 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 19:35:54.555706   13952 kubeadm.go:322] 
	I1205 19:35:54.555813   13952 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token wzh2ds.ktaesz4l7xwfj2en \
	I1205 19:35:54.555956   13952 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f61b399cb6776d724c7cf1a9a4fb9913cb1ff908aabc5bdeeadc4488475094de \
	I1205 19:35:54.555989   13952 kubeadm.go:322] 	--control-plane 
	I1205 19:35:54.555998   13952 kubeadm.go:322] 
	I1205 19:35:54.556104   13952 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:35:54.556113   13952 kubeadm.go:322] 
	I1205 19:35:54.556241   13952 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wzh2ds.ktaesz4l7xwfj2en \
	I1205 19:35:54.556385   13952 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f61b399cb6776d724c7cf1a9a4fb9913cb1ff908aabc5bdeeadc4488475094de 
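If the bootstrap token above expires, the --discovery-token-ca-cert-hash can be recomputed from the cluster CA (certificatesDir is /var/lib/minikube/certs per the config above). The standard kubeadm recipe, as a sketch:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# should print the f61b399c... hash shown in the join command above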
	I1205 19:35:54.556398   13952 cni.go:84] Creating CNI manager for ""
	I1205 19:35:54.556410   13952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:35:54.557819   13952 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 19:35:54.558986   13952 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 19:35:54.562381   13952 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1205 19:35:54.562395   13952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1205 19:35:54.577410   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 19:35:55.178013   13952 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:35:55.178136   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:55.178153   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=addons-030936 minikube.k8s.io/updated_at=2023_12_05T19_35_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:55.185098   13952 ops.go:34] apiserver oom_adj: -16
	I1205 19:35:55.255720   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:55.316886   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:55.879855   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:56.379991   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:56.879407   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:57.379915   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:57.879455   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:58.379497   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:58.879642   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:59.379000   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:59.879083   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:00.379959   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:00.879906   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:01.378981   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:01.879663   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:02.378980   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:02.879650   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:03.379783   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:03.879884   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:04.379478   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:04.879908   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:05.379727   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:05.879032   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:06.379866   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:06.879589   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:07.379775   13952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:07.450764   13952 kubeadm.go:1088] duration metric: took 12.272685303s to wait for elevateKubeSystemPrivileges.
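The burst of identical `kubectl get sa default` calls above is a fixed-interval poll (one call roughly every 500 ms) for the default ServiceAccount, which only appears once the controller-manager is up; that is the 12.27 s accounted for in this duration metric. The same wait as a plain shell loop (a sketch, not minikube's actual implementation):

	# Poll for the default ServiceAccount, capped at ~60s.
	for _ in $(seq 1 120); do
	  sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig \
	    get sa default >/dev/null 2>&1 && break
	  sleep 0.5
	done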
	I1205 19:36:07.450794   13952 kubeadm.go:406] StartCluster complete in 21.671758877s
	I1205 19:36:07.450816   13952 settings.go:142] acquiring lock: {Name:mkfaf26f24f59aefb8a41893ed2faf05d01ae7b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:07.450931   13952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 19:36:07.451355   13952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/kubeconfig: {Name:mk1f41ec1ae8a6de6a6de4f641695e135340252f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:07.451533   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:36:07.451613   13952 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1205 19:36:07.451716   13952 addons.go:69] Setting volumesnapshots=true in profile "addons-030936"
	I1205 19:36:07.451725   13952 addons.go:69] Setting helm-tiller=true in profile "addons-030936"
	I1205 19:36:07.451740   13952 addons.go:69] Setting metrics-server=true in profile "addons-030936"
	I1205 19:36:07.451744   13952 config.go:182] Loaded profile config "addons-030936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:36:07.451747   13952 addons.go:231] Setting addon volumesnapshots=true in "addons-030936"
	I1205 19:36:07.451754   13952 addons.go:231] Setting addon helm-tiller=true in "addons-030936"
	I1205 19:36:07.451769   13952 addons.go:69] Setting inspektor-gadget=true in profile "addons-030936"
	I1205 19:36:07.451778   13952 addons.go:69] Setting ingress=true in profile "addons-030936"
	I1205 19:36:07.451781   13952 addons.go:69] Setting default-storageclass=true in profile "addons-030936"
	I1205 19:36:07.451779   13952 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-030936"
	I1205 19:36:07.451794   13952 addons.go:231] Setting addon ingress=true in "addons-030936"
	I1205 19:36:07.451796   13952 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-030936"
	I1205 19:36:07.451801   13952 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-030936"
	I1205 19:36:07.451808   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.451792   13952 addons.go:69] Setting cloud-spanner=true in profile "addons-030936"
	I1205 19:36:07.451819   13952 addons.go:69] Setting storage-provisioner=true in profile "addons-030936"
	I1205 19:36:07.451819   13952 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-030936"
	I1205 19:36:07.451829   13952 addons.go:231] Setting addon storage-provisioner=true in "addons-030936"
	I1205 19:36:07.451836   13952 addons.go:231] Setting addon cloud-spanner=true in "addons-030936"
	I1205 19:36:07.451838   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.451847   13952 addons.go:69] Setting gcp-auth=true in profile "addons-030936"
	I1205 19:36:07.451848   13952 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-030936"
	I1205 19:36:07.451865   13952 mustload.go:65] Loading cluster: addons-030936
	I1205 19:36:07.451882   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.451898   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.451934   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.451836   13952 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-030936"
	I1205 19:36:07.452004   13952 config.go:182] Loaded profile config "addons-030936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:36:07.452165   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.452187   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.452244   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.452323   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.452350   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.452362   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.451785   13952 addons.go:231] Setting addon inspektor-gadget=true in "addons-030936"
	I1205 19:36:07.452692   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.452729   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.451758   13952 addons.go:69] Setting ingress-dns=true in profile "addons-030936"
	I1205 19:36:07.452855   13952 addons.go:231] Setting addon ingress-dns=true in "addons-030936"
	I1205 19:36:07.452897   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.452362   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.453171   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.453338   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.451809   13952 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-030936"
	I1205 19:36:07.455268   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.455713   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.451758   13952 addons.go:231] Setting addon metrics-server=true in "addons-030936"
	I1205 19:36:07.457461   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.457898   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.451808   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.451791   13952 addons.go:69] Setting registry=true in profile "addons-030936"
	I1205 19:36:07.458882   13952 addons.go:231] Setting addon registry=true in "addons-030936"
	I1205 19:36:07.458957   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.459285   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.464461   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.495374   13952 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-030936" context rescaled to 1 replicas
	I1205 19:36:07.495423   13952 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:36:07.498546   13952 out.go:177] * Verifying Kubernetes components...
	I1205 19:36:07.497621   13952 addons.go:231] Setting addon default-storageclass=true in "addons-030936"
	I1205 19:36:07.502587   13952 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1205 19:36:07.500898   13952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:36:07.500898   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.506818   13952 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:36:07.504870   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.510938   13952 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1205 19:36:07.512212   13952 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1205 19:36:07.513499   13952 out.go:177]   - Using image docker.io/registry:2.8.3
	I1205 19:36:07.508391   13952 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:36:07.508396   13952 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1205 19:36:07.510877   13952 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-030936"
	I1205 19:36:07.508377   13952 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:36:07.512233   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1205 19:36:07.514956   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:36:07.514983   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.516122   13952 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1205 19:36:07.516180   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.520153   13952 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1205 19:36:07.522083   13952 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1205 19:36:07.523626   13952 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:36:07.520276   13952 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1205 19:36:07.523660   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1205 19:36:07.523711   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.520490   13952 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1205 19:36:07.520771   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:07.520252   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.521941   13952 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 19:36:07.522103   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 19:36:07.523773   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1205 19:36:07.523834   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.526967   13952 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1205 19:36:07.526980   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 19:36:07.526988   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 19:36:07.527253   13952 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:36:07.527306   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.531455   13952 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 19:36:07.531482   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 19:36:07.531541   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.533323   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:07.529453   13952 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:36:07.529510   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1205 19:36:07.540340   13952 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1205 19:36:07.540365   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 19:36:07.542076   13952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 19:36:07.542094   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 19:36:07.542140   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.542149   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.549249   13952 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:36:07.549276   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1205 19:36:07.549336   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.552318   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 19:36:07.542484   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.555426   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 19:36:07.556785   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 19:36:07.557754   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.560854   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 19:36:07.563371   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1205 19:36:07.565127   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 19:36:07.566581   13952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 19:36:07.568048   13952 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 19:36:07.568071   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 19:36:07.568125   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.569496   13952 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 19:36:07.568331   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.570178   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.572756   13952 out.go:177]   - Using image docker.io/busybox:stable
	I1205 19:36:07.574608   13952 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:36:07.574624   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 19:36:07.574679   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.584005   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.591395   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.593060   13952 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:36:07.593078   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:36:07.593184   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:07.601820   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.602558   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.603671   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.609023   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.612189   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.613325   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.615205   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.615663   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:07.627525   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 19:36:07.628714   13952 node_ready.go:35] waiting up to 6m0s for node "addons-030936" to be "Ready" ...
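node_ready.go polls the node object until its Ready condition reports True, within the 6m0s budget shown above. A roughly equivalent manual check, assuming the same context name, is:

	kubectl --context addons-030936 wait --for=condition=Ready \
	  node/addons-030936 --timeout=6m

kubectl wait blocks until the condition holds or the timeout expires; the repeated node_ready.go:58 lines further down are the intermediate polls that still see "Ready":"False".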
	W1205 19:36:07.632407   13952 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 19:36:07.632457   13952 retry.go:31] will retry after 273.42402ms: ssh: handshake failed: EOF
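The handshake EOF above is a transient failure while sshd in the freshly started container is still coming up, so sshutil retries after a short delay instead of failing the run. A minimal shell sketch of the same retry-on-transient-error pattern, using the port, user, and key path from the sshutil lines above (the 5-attempt cap and 0.3s delay are illustrative, not minikube's actual schedule):

	for i in 1 2 3 4 5; do
	  ssh -o StrictHostKeyChecking=no -p 32772 \
	    -i /home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa \
	    docker@127.0.0.1 true && break
	  sleep 0.3
	done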
	I1205 19:36:07.830288   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 19:36:07.929696   13952 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 19:36:07.929729   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 19:36:07.933858   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:36:07.942331   13952 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1205 19:36:07.942363   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1205 19:36:08.044517   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:36:08.044792   13952 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1205 19:36:08.044853   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1205 19:36:08.045346   13952 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:36:08.045385   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 19:36:08.048297   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:36:08.124926   13952 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 19:36:08.125008   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 19:36:08.126151   13952 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 19:36:08.126176   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 19:36:08.127282   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:36:08.131218   13952 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 19:36:08.131244   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 19:36:08.135344   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:36:08.237538   13952 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1205 19:36:08.237571   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1205 19:36:08.247265   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:36:08.327946   13952 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 19:36:08.328024   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 19:36:08.334821   13952 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 19:36:08.334847   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 19:36:08.339798   13952 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1205 19:36:08.339875   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1205 19:36:08.343195   13952 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 19:36:08.343298   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 19:36:08.636628   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:36:08.638717   13952 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 19:36:08.638785   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 19:36:08.649411   13952 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:36:08.649441   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 19:36:08.725724   13952 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1205 19:36:08.725751   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1205 19:36:08.738447   13952 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 19:36:08.738478   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 19:36:08.741944   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1205 19:36:08.936728   13952 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 19:36:08.936751   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 19:36:08.944182   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:36:09.042504   13952 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 19:36:09.042603   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 19:36:09.225163   13952 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1205 19:36:09.225256   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1205 19:36:09.526363   13952 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 19:36:09.526451   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 19:36:09.546658   13952 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:09.546709   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 19:36:09.739263   13952 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1205 19:36:09.739352   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1205 19:36:09.748099   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:09.931023   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:09.944793   13952 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.317228086s)
	I1205 19:36:09.944830   13952 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
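The 2.3s command completed above is the CoreDNS rewrite: it pipes the coredns ConfigMap through sed and replaces it so cluster pods can resolve the Docker host gateway by name. Reconstructed from the sed expression, the stanza injected into the Corefile is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

plus a log directive inserted ahead of the errors plugin, before the result is fed back through kubectl replace -f -.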
	I1205 19:36:10.027316   13952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 19:36:10.027342   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 19:36:10.041285   13952 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1205 19:36:10.041313   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1205 19:36:10.332881   13952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 19:36:10.332910   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 19:36:10.544893   13952 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1205 19:36:10.544939   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1205 19:36:10.626180   13952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 19:36:10.626218   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 19:36:10.926839   13952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 19:36:10.926878   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 19:36:11.125881   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1205 19:36:11.139643   13952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:36:11.139676   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 19:36:11.332072   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:36:11.545454   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.715080653s)
	I1205 19:36:12.131426   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.197524858s)
	I1205 19:36:12.131606   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.087034758s)
	I1205 19:36:12.131886   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.083563514s)
	I1205 19:36:12.231187   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:13.841091   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.713773721s)
	I1205 19:36:13.841120   13952 addons.go:467] Verifying addon ingress=true in "addons-030936"
	I1205 19:36:13.843028   13952 out.go:177] * Verifying ingress addon...
	I1205 19:36:13.841203   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.705787911s)
	I1205 19:36:13.841260   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.593949702s)
	I1205 19:36:13.841306   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.204591211s)
	I1205 19:36:13.841475   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.09948472s)
	I1205 19:36:13.841545   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.897278766s)
	I1205 19:36:13.841651   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.910544552s)
	I1205 19:36:13.841686   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.715770095s)
	I1205 19:36:13.843084   13952 addons.go:467] Verifying addon metrics-server=true in "addons-030936"
	W1205 19:36:13.843120   13952 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 19:36:13.844750   13952 retry.go:31] will retry after 303.061165ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
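Both messages above record the same CRD-before-CR race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not registered the new kind yet, hence "ensure CRDs are installed first". The log shows minikube's remedy, a 303ms retry that re-applies with --force (the 19:36:14 ssh_runner line below). A more explicit mitigation, noted here as a general kubectl pattern rather than what minikube does, is to apply the CRDs first and wait for them to be established:

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml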
	I1205 19:36:13.843121   13952 addons.go:467] Verifying addon registry=true in "addons-030936"
	I1205 19:36:13.846361   13952 out.go:177] * Verifying registry addon...
	I1205 19:36:13.845478   13952 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 19:36:13.848607   13952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 19:36:13.852547   13952 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 19:36:13.852571   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:13.853627   13952 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 19:36:13.853646   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:13.855575   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:13.856284   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
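The kapi.go:96 lines that dominate the rest of this section are minikube's pod poll: it lists pods by label selector roughly twice a second and logs the phase until it leaves Pending. A blocking one-liner with the same effect, assuming the same context (the 120s timeout is illustrative), is:

	kubectl --context addons-030936 -n ingress-nginx wait \
	  --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx \
	  --timeout=120s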
	I1205 19:36:14.148230   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:14.341219   13952 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 19:36:14.341308   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:14.360182   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:14.360482   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:14.362044   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:14.638429   13952 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 19:36:14.732175   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:14.738099   13952 addons.go:231] Setting addon gcp-auth=true in "addons-030936"
	I1205 19:36:14.738156   13952 host.go:66] Checking if "addons-030936" exists ...
	I1205 19:36:14.738647   13952 cli_runner.go:164] Run: docker container inspect addons-030936 --format={{.State.Status}}
	I1205 19:36:14.758604   13952 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 19:36:14.758655   13952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-030936
	I1205 19:36:14.775146   13952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/addons-030936/id_rsa Username:docker}
	I1205 19:36:14.833092   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.500911168s)
	I1205 19:36:14.833134   13952 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-030936"
	I1205 19:36:14.835189   13952 out.go:177] * Verifying csi-hostpath-driver addon...
	I1205 19:36:14.837440   13952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 19:36:14.842910   13952 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 19:36:14.842940   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:14.852072   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:14.859164   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:14.925491   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:15.427725   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:15.427805   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:15.428400   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:15.929617   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:15.930577   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:15.931401   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:16.430299   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:16.431318   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:16.432160   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:16.639334   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.49105348s)
	I1205 19:36:16.639348   13952 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.880711701s)
	I1205 19:36:16.641734   13952 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:36:16.643796   13952 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1205 19:36:16.645602   13952 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 19:36:16.645628   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 19:36:16.724870   13952 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 19:36:16.724941   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 19:36:16.748296   13952 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:36:16.748324   13952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1205 19:36:16.840353   13952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:36:16.856557   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:16.928679   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:16.929051   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:17.226347   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:17.358273   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:17.427531   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:17.428718   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:17.859577   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:17.861739   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:17.862516   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:18.431589   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:18.432628   13952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.592231697s)
	I1205 19:36:18.433737   13952 addons.go:467] Verifying addon gcp-auth=true in "addons-030936"
	I1205 19:36:18.435486   13952 out.go:177] * Verifying gcp-auth addon...
	I1205 19:36:18.437892   13952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 19:36:18.439950   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:18.440497   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:18.442225   13952 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 19:36:18.442271   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:18.446697   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:18.856381   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:18.859278   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:18.860306   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:18.950522   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:19.357176   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:19.359361   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:19.359441   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:19.450416   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:19.656464   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:19.855775   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:19.858721   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:19.859260   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:19.950211   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:20.356446   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:20.358792   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:20.359418   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:20.450125   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:20.857275   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:20.859910   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:20.859971   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:20.949866   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:21.355478   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:21.359118   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:21.360085   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:21.449799   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:21.855959   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:21.859213   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:21.859493   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:21.950135   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:22.155842   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:22.356475   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:22.359008   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:22.359394   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:22.450113   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:22.855912   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:22.859198   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:22.859227   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:22.950019   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:23.356101   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:23.358990   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:23.359330   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:23.449691   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:23.855676   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:23.858676   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:23.859289   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:23.949928   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:24.355892   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:24.358898   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:24.359666   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:24.450565   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:24.657659   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:24.856375   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:24.858946   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:24.859079   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:24.949746   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:25.356872   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:25.359183   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:25.359196   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:25.449977   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:25.856359   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:25.858881   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:25.859094   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:25.949775   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:26.356896   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:26.359730   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:26.360041   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:26.450302   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:26.856341   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:26.858578   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:26.859001   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:26.949678   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:27.155196   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:27.356826   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:27.359072   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:27.359175   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:27.449846   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:27.856462   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:27.858817   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:27.859162   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:27.949971   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:28.356434   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:28.359218   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:28.359328   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:28.449733   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:28.855628   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:28.858361   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:28.859884   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:28.949382   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:29.155765   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:29.356190   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:29.359228   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:29.359646   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:29.450223   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:29.856432   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:29.859282   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:29.859289   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:29.949797   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:30.355744   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:30.358735   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:30.359319   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:30.450171   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:30.856625   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:30.859071   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:30.859274   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:30.949918   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:31.355705   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:31.358609   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:31.360145   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:31.449997   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:31.655886   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:31.856091   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:31.858802   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:31.859707   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:31.950451   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:32.356110   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:32.358857   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:32.359619   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:32.450075   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:32.855614   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:32.858518   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:32.860061   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:32.949912   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:33.355847   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:33.359046   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:33.359047   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:33.449927   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:33.857111   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:33.859462   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:33.859668   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:33.950529   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:34.156053   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:34.356755   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:34.359780   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:34.361574   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:34.450308   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:34.856542   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:34.859093   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:34.859375   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:34.950042   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:35.356598   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:35.359636   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:35.359722   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:35.450531   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:35.856829   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:35.859098   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:35.859163   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:35.949793   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:36.355636   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:36.358785   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:36.360018   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:36.449842   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:36.655266   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:36.855621   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:36.858635   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:36.860178   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:36.953615   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:37.356711   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:37.359575   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:37.359844   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:37.450027   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:37.856223   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:37.859036   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:37.859372   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:37.949911   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:38.356026   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:38.358681   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:38.359629   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:38.450139   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:38.655820   13952 node_ready.go:58] node "addons-030936" has status "Ready":"False"
	I1205 19:36:38.856488   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:38.858815   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:38.859186   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:38.949788   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:39.356487   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:39.359021   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:39.359093   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:39.449895   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:39.856910   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:39.859218   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:39.859382   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:39.950438   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:40.356557   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:40.359311   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:40.359348   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:40.454574   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:40.657354   13952 node_ready.go:49] node "addons-030936" has status "Ready":"True"
	I1205 19:36:40.657384   13952 node_ready.go:38] duration metric: took 33.028648818s waiting for node "addons-030936" to be "Ready" ...
	I1205 19:36:40.657397   13952 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
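The node_ready.go:58 lines above repeat every 2-3 seconds until node_ready.go:49 reports "Ready":"True" at 19:36:40, roughly 33s in, after which the waiter moves on to system-critical pods. A minimal client-go sketch of this style of node wait (an illustration under assumptions, not minikube's actual node_ready.go, which adds retry, backoff, and timeout plumbing):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the default kubeconfig (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-030936", metav1.GetOptions{})
            if err == nil {
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        fmt.Printf("node %q has status %q:%q\n", node.Name, cond.Type, cond.Status)
                        if cond.Status == corev1.ConditionTrue {
                            return
                        }
                    }
                }
            }
            time.Sleep(2 * time.Second) // the log above polls at about this cadence
        }
    }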
	I1205 19:36:40.666589   13952 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cvgxt" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:40.857077   13952 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 19:36:40.857102   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:40.860191   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:40.861143   13952 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 19:36:40.861163   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:40.949572   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:41.360097   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:41.360257   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:41.425406   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:41.450376   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:41.857726   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:41.860634   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:41.860844   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:41.949955   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:42.357200   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:42.359720   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:42.360424   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:42.449985   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:42.685166   13952 pod_ready.go:102] pod "coredns-5dd5756b68-cvgxt" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:42.859175   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:42.860998   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:42.862322   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:42.950722   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:43.185880   13952 pod_ready.go:92] pod "coredns-5dd5756b68-cvgxt" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:43.185907   13952 pod_ready.go:81] duration metric: took 2.519289974s waiting for pod "coredns-5dd5756b68-cvgxt" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.185929   13952 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-030936" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.190857   13952 pod_ready.go:92] pod "etcd-addons-030936" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:43.190880   13952 pod_ready.go:81] duration metric: took 4.943475ms waiting for pod "etcd-addons-030936" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.190893   13952 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-030936" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.195784   13952 pod_ready.go:92] pod "kube-apiserver-addons-030936" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:43.195805   13952 pod_ready.go:81] duration metric: took 4.90688ms waiting for pod "kube-apiserver-addons-030936" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.195818   13952 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-030936" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.200689   13952 pod_ready.go:92] pod "kube-controller-manager-addons-030936" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:43.200711   13952 pod_ready.go:81] duration metric: took 4.888204ms waiting for pod "kube-controller-manager-addons-030936" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.200722   13952 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kp9gj" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.358211   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:43.359995   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:43.361729   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:43.450400   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:43.456622   13952 pod_ready.go:92] pod "kube-proxy-kp9gj" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:43.456650   13952 pod_ready.go:81] duration metric: took 255.922458ms waiting for pod "kube-proxy-kp9gj" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.456659   13952 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-030936" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.856560   13952 pod_ready.go:92] pod "kube-scheduler-addons-030936" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:43.856590   13952 pod_ready.go:81] duration metric: took 399.925066ms waiting for pod "kube-scheduler-addons-030936" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.856601   13952 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-8586h" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:43.858740   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:43.860713   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:43.861203   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:43.949904   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:44.357077   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:44.359412   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:44.360767   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:44.450478   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:44.858590   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:44.859256   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:44.860080   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:44.949605   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:45.358536   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:45.359512   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:45.360764   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:45.450671   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:45.857805   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:45.860584   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:45.925707   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:45.950235   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:46.231698   13952 pod_ready.go:102] pod "metrics-server-7c66d45ddc-8586h" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:46.430856   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:46.433411   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:46.434025   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:46.452969   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:46.858567   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:46.860481   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:46.861166   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:46.950759   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:47.358589   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:47.362122   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:47.362450   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:47.450227   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:47.857260   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:47.859383   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:47.860286   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:47.949719   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:48.163001   13952 pod_ready.go:92] pod "metrics-server-7c66d45ddc-8586h" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:48.163028   13952 pod_ready.go:81] duration metric: took 4.306419505s waiting for pod "metrics-server-7c66d45ddc-8586h" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:48.163039   13952 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:48.357540   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:48.359214   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:48.360179   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:48.450357   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:48.857387   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:48.859160   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:48.860677   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:48.950414   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:49.357588   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:49.360066   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:49.361435   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:49.450112   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:49.857300   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:49.859457   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:49.859656   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:49.950779   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:50.263848   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:50.358345   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:50.359611   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:50.360369   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:50.450295   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:50.938212   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:50.938426   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:50.939026   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:50.970067   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:51.358053   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:51.359574   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:51.360337   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:51.450614   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:51.926687   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:51.929719   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:51.930112   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:51.953723   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:52.331915   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:52.436873   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:52.438708   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:52.534796   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:52.539255   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:52.858888   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:52.859517   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:52.860664   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:52.950870   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:53.358205   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:53.360205   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:53.360527   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:53.450267   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:53.858643   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:53.865964   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:53.865987   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:53.951100   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:54.357974   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:54.362654   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:54.362686   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:54.450423   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:54.763709   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:54.858387   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:54.860119   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:54.861198   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:54.949766   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:55.358530   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:55.360265   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:55.361228   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:55.450256   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:55.858202   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:55.860084   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:55.860712   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:55.950429   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:56.357660   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:56.359942   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:56.360214   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:56.451237   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:56.763847   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:56.858313   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:56.862140   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:56.862708   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:56.950345   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:57.357824   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:57.359884   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:57.360365   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:57.449731   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:57.858399   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:57.928042   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:57.928080   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:57.951915   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:58.358312   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:58.359768   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:58.361037   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:58.450926   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:58.857259   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:58.859851   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:58.860635   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:58.950458   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:59.263568   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:59.357140   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:59.359206   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:59.360342   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:59.449928   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:59.857911   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:59.860030   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:59.860373   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:59.950317   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:00.357318   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:00.359352   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:00.360418   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:00.450095   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:00.859373   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:00.862931   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:00.863053   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:00.950213   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:01.357555   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:01.360383   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:01.360515   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:01.450206   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:01.763247   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:01.857354   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:01.859495   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:01.859540   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:01.949920   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:02.357402   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:02.359434   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:02.360385   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:02.450104   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:02.857116   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:02.861055   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:02.861096   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:02.950570   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:03.357518   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:03.358894   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:03.360928   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:03.450466   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:03.763331   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:03.857162   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:03.859398   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:03.860735   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:03.950066   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:04.433232   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:04.437946   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:04.438867   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:04.526931   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:04.931576   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:04.932493   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:04.934497   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:05.027706   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:05.357424   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:05.359332   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:05.360810   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:05.450702   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:05.763451   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:05.857617   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:05.860621   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:05.861517   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:05.950413   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:06.358330   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:06.360563   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:06.360764   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:06.450463   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:06.858058   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:06.860254   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:06.862162   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:06.950383   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:07.356814   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:07.359316   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:07.360857   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:07.450816   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:07.763736   13952 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:07.858169   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:07.859358   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:07.860133   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:07.950828   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:08.357854   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:08.359357   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:08.360269   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:08.450097   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:08.861294   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:08.862156   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:08.864702   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:08.950206   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:09.357068   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:09.359119   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:09.360583   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:09.453106   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:09.857895   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:09.860287   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:09.862521   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:09.953607   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:10.264320   13952 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:10.264346   13952 pod_ready.go:81] duration metric: took 22.101299085s waiting for pod "nvidia-device-plugin-daemonset-wnvvv" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:10.264373   13952 pod_ready.go:38] duration metric: took 29.606962926s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
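The kapi.go:96 lines that dominate this section come from per-addon wait loops: each loop lists the pods matching one label selector about twice a second and logs the pending state until every matched pod is Running, with kapi.go:86 logging "Found N Pods" the first time the selector matches anything (as at 19:36:40.857 above). A rough sketch of that pattern, reusing the imports and client from the previous snippet; waitForSelector is a hypothetical name, not minikube's:

    // waitForSelector approximates one kapi.go-style wait loop: poll the pods
    // matching a label selector until the selector matches something and every
    // matched pod reports phase Running.
    func waitForSelector(client *kubernetes.Clientset, ns, selector string) {
        found := false
        for {
            pods, err := client.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil {
                if len(pods.Items) == 0 {
                    fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
                }
                if !found && len(pods.Items) > 0 {
                    fmt.Printf("Found %d Pods for label selector %s\n", len(pods.Items), selector)
                    found = true
                }
                running := len(pods.Items) > 0
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        running = false
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                    }
                }
                if running {
                    return
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
        }
    }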
	I1205 19:37:10.264396   13952 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:37:10.264428   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 19:37:10.264491   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 19:37:10.342654   13952 cri.go:89] found id: "3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646"
	I1205 19:37:10.342679   13952 cri.go:89] found id: ""
	I1205 19:37:10.342690   13952 logs.go:284] 1 containers: [3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646]
	I1205 19:37:10.342742   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:10.346118   13952 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 19:37:10.346191   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 19:37:10.358919   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:10.359844   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:10.360539   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:10.445053   13952 cri.go:89] found id: "722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5"
	I1205 19:37:10.445080   13952 cri.go:89] found id: ""
	I1205 19:37:10.445089   13952 logs.go:284] 1 containers: [722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5]
	I1205 19:37:10.445141   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:10.448474   13952 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 19:37:10.448538   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 19:37:10.450851   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:10.542451   13952 cri.go:89] found id: "ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f"
	I1205 19:37:10.542480   13952 cri.go:89] found id: ""
	I1205 19:37:10.542490   13952 logs.go:284] 1 containers: [ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f]
	I1205 19:37:10.542543   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:10.546230   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 19:37:10.546304   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 19:37:10.640759   13952 cri.go:89] found id: "479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b"
	I1205 19:37:10.640782   13952 cri.go:89] found id: ""
	I1205 19:37:10.640792   13952 logs.go:284] 1 containers: [479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b]
	I1205 19:37:10.640840   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:10.644284   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 19:37:10.644383   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 19:37:10.746496   13952 cri.go:89] found id: "6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093"
	I1205 19:37:10.746528   13952 cri.go:89] found id: ""
	I1205 19:37:10.746538   13952 logs.go:284] 1 containers: [6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093]
	I1205 19:37:10.746591   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:10.750572   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 19:37:10.750650   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 19:37:10.834441   13952 cri.go:89] found id: "f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc"
	I1205 19:37:10.834468   13952 cri.go:89] found id: ""
	I1205 19:37:10.834478   13952 logs.go:284] 1 containers: [f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc]
	I1205 19:37:10.834533   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:10.838018   13952 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 19:37:10.838081   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 19:37:10.859035   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:10.860168   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:10.860873   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:10.928557   13952 cri.go:89] found id: "6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4"
	I1205 19:37:10.928586   13952 cri.go:89] found id: ""
	I1205 19:37:10.928596   13952 logs.go:284] 1 containers: [6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4]
	I1205 19:37:10.928652   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:10.932263   13952 logs.go:123] Gathering logs for kube-apiserver [3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646] ...
	I1205 19:37:10.932291   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646"
	I1205 19:37:10.950515   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:10.980877   13952 logs.go:123] Gathering logs for etcd [722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5] ...
	I1205 19:37:10.980913   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5"
	I1205 19:37:11.075311   13952 logs.go:123] Gathering logs for coredns [ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f] ...
	I1205 19:37:11.075364   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f"
	I1205 19:37:11.159406   13952 logs.go:123] Gathering logs for kube-proxy [6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093] ...
	I1205 19:37:11.159440   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093"
	I1205 19:37:11.249807   13952 logs.go:123] Gathering logs for kube-controller-manager [f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc] ...
	I1205 19:37:11.249833   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc"
	I1205 19:37:11.358669   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:11.358925   13952 logs.go:123] Gathering logs for kindnet [6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4] ...
	I1205 19:37:11.358985   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4"
	I1205 19:37:11.359860   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:11.360136   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:11.429967   13952 logs.go:123] Gathering logs for CRI-O ...
	I1205 19:37:11.429999   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 19:37:11.450355   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:11.507699   13952 logs.go:123] Gathering logs for dmesg ...
	I1205 19:37:11.507733   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 19:37:11.535713   13952 logs.go:123] Gathering logs for container status ...
	I1205 19:37:11.535740   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 19:37:11.580965   13952 logs.go:123] Gathering logs for describe nodes ...
	I1205 19:37:11.580998   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 19:37:11.762121   13952 logs.go:123] Gathering logs for kube-scheduler [479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b] ...
	I1205 19:37:11.762163   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b"
	I1205 19:37:11.842546   13952 logs.go:123] Gathering logs for kubelet ...
	I1205 19:37:11.842586   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 19:37:11.862592   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:11.863864   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:11.865161   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:11.951147   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:12.358525   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:12.360313   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:12.360425   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:12.450115   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:12.857391   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:12.859419   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:12.860882   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:12.950889   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:13.357424   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:13.359831   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:13.360131   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:13.449699   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:13.933842   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:13.938141   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:13.939868   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:14.030743   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:14.358491   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:14.427061   13952 kapi.go:107] duration metric: took 1m0.578449612s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 19:37:14.427287   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:14.437631   13952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:37:14.450830   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:14.526059   13952 api_server.go:72] duration metric: took 1m7.030601625s to wait for apiserver process to appear ...
	I1205 19:37:14.526089   13952 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:37:14.526126   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 19:37:14.526187   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 19:37:14.737387   13952 cri.go:89] found id: "3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646"
	I1205 19:37:14.737415   13952 cri.go:89] found id: ""
	I1205 19:37:14.737437   13952 logs.go:284] 1 containers: [3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646]
	I1205 19:37:14.737487   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:14.742634   13952 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 19:37:14.742742   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 19:37:14.929001   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:14.929017   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:14.937007   13952 cri.go:89] found id: "722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5"
	I1205 19:37:14.937035   13952 cri.go:89] found id: ""
	I1205 19:37:14.937045   13952 logs.go:284] 1 containers: [722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5]
	I1205 19:37:14.937102   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:14.940963   13952 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 19:37:14.941026   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 19:37:14.954544   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:15.142462   13952 cri.go:89] found id: "ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f"
	I1205 19:37:15.142491   13952 cri.go:89] found id: ""
	I1205 19:37:15.142501   13952 logs.go:284] 1 containers: [ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f]
	I1205 19:37:15.142562   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:15.146074   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 19:37:15.146142   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 19:37:15.327506   13952 cri.go:89] found id: "479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b"
	I1205 19:37:15.327532   13952 cri.go:89] found id: ""
	I1205 19:37:15.327541   13952 logs.go:284] 1 containers: [479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b]
	I1205 19:37:15.327589   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:15.331933   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 19:37:15.331995   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 19:37:15.358699   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:15.427780   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:15.445833   13952 cri.go:89] found id: "6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093"
	I1205 19:37:15.445859   13952 cri.go:89] found id: ""
	I1205 19:37:15.445868   13952 logs.go:284] 1 containers: [6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093]
	I1205 19:37:15.445919   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:15.449832   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:15.449942   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 19:37:15.449995   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 19:37:15.543218   13952 cri.go:89] found id: "f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc"
	I1205 19:37:15.543294   13952 cri.go:89] found id: ""
	I1205 19:37:15.543309   13952 logs.go:284] 1 containers: [f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc]
	I1205 19:37:15.543364   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:15.546899   13952 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 19:37:15.546958   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 19:37:15.644182   13952 cri.go:89] found id: "6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4"
	I1205 19:37:15.644285   13952 cri.go:89] found id: ""
	I1205 19:37:15.644298   13952 logs.go:284] 1 containers: [6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4]
	I1205 19:37:15.644358   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:15.648312   13952 logs.go:123] Gathering logs for container status ...
	I1205 19:37:15.648335   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 19:37:15.750164   13952 logs.go:123] Gathering logs for kubelet ...
	I1205 19:37:15.750193   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 19:37:15.858393   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:15.860174   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:15.922401   13952 logs.go:123] Gathering logs for dmesg ...
	I1205 19:37:15.922451   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 19:37:15.936130   13952 logs.go:123] Gathering logs for describe nodes ...
	I1205 19:37:15.936156   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 19:37:15.950094   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:16.084352   13952 logs.go:123] Gathering logs for coredns [ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f] ...
	I1205 19:37:16.084382   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f"
	I1205 19:37:16.155159   13952 logs.go:123] Gathering logs for kindnet [6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4] ...
	I1205 19:37:16.155195   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4"
	I1205 19:37:16.229835   13952 logs.go:123] Gathering logs for CRI-O ...
	I1205 19:37:16.229868   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 19:37:16.311136   13952 logs.go:123] Gathering logs for kube-apiserver [3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646] ...
	I1205 19:37:16.311171   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646"
	I1205 19:37:16.358700   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:16.359192   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:16.364960   13952 logs.go:123] Gathering logs for etcd [722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5] ...
	I1205 19:37:16.364991   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5"
	I1205 19:37:16.444776   13952 logs.go:123] Gathering logs for kube-scheduler [479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b] ...
	I1205 19:37:16.444815   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b"
	I1205 19:37:16.449909   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:16.485879   13952 logs.go:123] Gathering logs for kube-proxy [6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093] ...
	I1205 19:37:16.485911   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093"
	I1205 19:37:16.564267   13952 logs.go:123] Gathering logs for kube-controller-manager [f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc] ...
	I1205 19:37:16.564302   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc"
	I1205 19:37:16.857710   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:16.860683   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:16.950931   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:17.357943   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:17.361662   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:17.449707   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:17.858234   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:17.859496   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:17.950606   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:18.357361   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:18.359745   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:18.449800   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:18.857494   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:18.859743   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:18.950817   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:19.176767   13952 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:37:19.182696   13952 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1205 19:37:19.183980   13952 api_server.go:141] control plane version: v1.28.4
	I1205 19:37:19.184001   13952 api_server.go:131] duration metric: took 4.657906109s to wait for apiserver health ...
	I1205 19:37:19.184009   13952 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:37:19.184029   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 19:37:19.184068   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 19:37:19.218186   13952 cri.go:89] found id: "3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646"
	I1205 19:37:19.218213   13952 cri.go:89] found id: ""
	I1205 19:37:19.218222   13952 logs.go:284] 1 containers: [3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646]
	I1205 19:37:19.218280   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:19.221499   13952 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 19:37:19.221559   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 19:37:19.254582   13952 cri.go:89] found id: "722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5"
	I1205 19:37:19.254616   13952 cri.go:89] found id: ""
	I1205 19:37:19.254627   13952 logs.go:284] 1 containers: [722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5]
	I1205 19:37:19.254674   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:19.257838   13952 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 19:37:19.257887   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 19:37:19.291757   13952 cri.go:89] found id: "ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f"
	I1205 19:37:19.291779   13952 cri.go:89] found id: ""
	I1205 19:37:19.291789   13952 logs.go:284] 1 containers: [ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f]
	I1205 19:37:19.291839   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:19.295767   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 19:37:19.295835   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 19:37:19.355474   13952 cri.go:89] found id: "479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b"
	I1205 19:37:19.355500   13952 cri.go:89] found id: ""
	I1205 19:37:19.355510   13952 logs.go:284] 1 containers: [479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b]
	I1205 19:37:19.355559   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:19.357374   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:19.358848   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:19.359155   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 19:37:19.359209   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 19:37:19.391940   13952 cri.go:89] found id: "6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093"
	I1205 19:37:19.391967   13952 cri.go:89] found id: ""
	I1205 19:37:19.391978   13952 logs.go:284] 1 containers: [6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093]
	I1205 19:37:19.392030   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:19.428497   13952 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 19:37:19.428562   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 19:37:19.450849   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:19.527547   13952 cri.go:89] found id: "f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc"
	I1205 19:37:19.527575   13952 cri.go:89] found id: ""
	I1205 19:37:19.527586   13952 logs.go:284] 1 containers: [f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc]
	I1205 19:37:19.527640   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:19.531342   13952 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 19:37:19.531395   13952 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 19:37:19.633476   13952 cri.go:89] found id: "6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4"
	I1205 19:37:19.633500   13952 cri.go:89] found id: ""
	I1205 19:37:19.633509   13952 logs.go:284] 1 containers: [6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4]
	I1205 19:37:19.633564   13952 ssh_runner.go:195] Run: which crictl
	I1205 19:37:19.637538   13952 logs.go:123] Gathering logs for dmesg ...
	I1205 19:37:19.637566   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 19:37:19.651036   13952 logs.go:123] Gathering logs for kube-apiserver [3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646] ...
	I1205 19:37:19.651072   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646"
	I1205 19:37:19.765819   13952 logs.go:123] Gathering logs for coredns [ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f] ...
	I1205 19:37:19.765850   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f"
	I1205 19:37:19.858265   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:19.859806   13952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:19.862301   13952 logs.go:123] Gathering logs for kube-controller-manager [f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc] ...
	I1205 19:37:19.862323   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc"
	I1205 19:37:19.950108   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:19.976919   13952 logs.go:123] Gathering logs for container status ...
	I1205 19:37:19.976952   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 19:37:20.066936   13952 logs.go:123] Gathering logs for kubelet ...
	I1205 19:37:20.066966   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 19:37:20.201126   13952 logs.go:123] Gathering logs for etcd [722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5] ...
	I1205 19:37:20.201161   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5"
	I1205 19:37:20.251427   13952 logs.go:123] Gathering logs for kube-scheduler [479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b] ...
	I1205 19:37:20.251466   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b"
	I1205 19:37:20.291696   13952 logs.go:123] Gathering logs for kube-proxy [6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093] ...
	I1205 19:37:20.291725   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093"
	I1205 19:37:20.331671   13952 logs.go:123] Gathering logs for kindnet [6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4] ...
	I1205 19:37:20.331699   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4"
	I1205 19:37:20.357679   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:20.359705   13952 kapi.go:107] duration metric: took 1m6.514224456s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 19:37:20.365595   13952 logs.go:123] Gathering logs for CRI-O ...
	I1205 19:37:20.365618   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 19:37:20.435429   13952 logs.go:123] Gathering logs for describe nodes ...
	I1205 19:37:20.435469   13952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 19:37:20.449997   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:20.857483   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:20.950796   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:21.358017   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:21.450042   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:21.858241   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:21.950298   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:22.357519   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:22.450367   13952 kapi.go:107] duration metric: took 1m4.0124757s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 19:37:22.489507   13952 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-030936 cluster.
	I1205 19:37:22.624407   13952 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 19:37:22.646642   13952 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1205 19:37:22.857595   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:23.047420   13952 system_pods.go:59] 19 kube-system pods found
	I1205 19:37:23.047451   13952 system_pods.go:61] "coredns-5dd5756b68-cvgxt" [da64d584-8b3b-46ec-884f-57a0d22f1f0c] Running
	I1205 19:37:23.047455   13952 system_pods.go:61] "csi-hostpath-attacher-0" [d0f34f73-e182-4cd3-af1e-fdc87a1247fd] Running
	I1205 19:37:23.047459   13952 system_pods.go:61] "csi-hostpath-resizer-0" [ad7fe1de-e5d3-41a6-a669-5eb13661ece8] Running
	I1205 19:37:23.047466   13952 system_pods.go:61] "csi-hostpathplugin-299pr" [efebc474-fc37-42df-972e-611870fd272f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 19:37:23.047472   13952 system_pods.go:61] "etcd-addons-030936" [9084d1b2-7456-491d-ba13-81e119e50b8d] Running
	I1205 19:37:23.047478   13952 system_pods.go:61] "kindnet-b6nhd" [2863a3e1-2878-4b1f-b10e-c1f20e137d62] Running
	I1205 19:37:23.047482   13952 system_pods.go:61] "kube-apiserver-addons-030936" [16aa497e-3c1e-4dc3-a68d-03e471801572] Running
	I1205 19:37:23.047488   13952 system_pods.go:61] "kube-controller-manager-addons-030936" [a14860cb-50ff-49fa-a08b-ee3282939d60] Running
	I1205 19:37:23.047496   13952 system_pods.go:61] "kube-ingress-dns-minikube" [f6dfd03f-7966-4c37-89c4-e5a4a1c2e395] Running
	I1205 19:37:23.047500   13952 system_pods.go:61] "kube-proxy-kp9gj" [ef75f123-2e3d-4345-be48-46c46e8aa537] Running
	I1205 19:37:23.047507   13952 system_pods.go:61] "kube-scheduler-addons-030936" [23a4feca-cd28-4ed4-b9a6-85cc60e7843f] Running
	I1205 19:37:23.047511   13952 system_pods.go:61] "metrics-server-7c66d45ddc-8586h" [22718867-f984-4ef4-846c-45896c7a82bf] Running
	I1205 19:37:23.047517   13952 system_pods.go:61] "nvidia-device-plugin-daemonset-wnvvv" [78a4b26e-4608-4170-8a6a-de17b217468b] Running
	I1205 19:37:23.047521   13952 system_pods.go:61] "registry-hmgc4" [4f36e16b-74e5-4183-ae54-777afcc87dc9] Running
	I1205 19:37:23.047525   13952 system_pods.go:61] "registry-proxy-9wsfw" [23d952c3-eba0-4788-b241-d477ed5081a1] Running
	I1205 19:37:23.047529   13952 system_pods.go:61] "snapshot-controller-58dbcc7b99-gqmd7" [87f47f40-88ec-4064-8493-13ec94933413] Running
	I1205 19:37:23.047535   13952 system_pods.go:61] "snapshot-controller-58dbcc7b99-qhmfd" [8e7a0adb-b7df-409f-978d-28c4e57c2cfb] Running
	I1205 19:37:23.047539   13952 system_pods.go:61] "storage-provisioner" [ef82e7dd-313a-447e-84be-b95404c573a6] Running
	I1205 19:37:23.047545   13952 system_pods.go:61] "tiller-deploy-7b677967b9-cdtmt" [9203256c-9bc5-49b8-8ef1-47ca632955a8] Running
	I1205 19:37:23.047551   13952 system_pods.go:74] duration metric: took 3.863537598s to wait for pod list to return data ...
	I1205 19:37:23.047560   13952 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:37:23.049471   13952 default_sa.go:45] found service account: "default"
	I1205 19:37:23.049492   13952 default_sa.go:55] duration metric: took 1.923353ms for default service account to be created ...
	I1205 19:37:23.049501   13952 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:37:23.057551   13952 system_pods.go:86] 19 kube-system pods found
	I1205 19:37:23.057579   13952 system_pods.go:89] "coredns-5dd5756b68-cvgxt" [da64d584-8b3b-46ec-884f-57a0d22f1f0c] Running
	I1205 19:37:23.057585   13952 system_pods.go:89] "csi-hostpath-attacher-0" [d0f34f73-e182-4cd3-af1e-fdc87a1247fd] Running
	I1205 19:37:23.057590   13952 system_pods.go:89] "csi-hostpath-resizer-0" [ad7fe1de-e5d3-41a6-a669-5eb13661ece8] Running
	I1205 19:37:23.057598   13952 system_pods.go:89] "csi-hostpathplugin-299pr" [efebc474-fc37-42df-972e-611870fd272f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 19:37:23.057604   13952 system_pods.go:89] "etcd-addons-030936" [9084d1b2-7456-491d-ba13-81e119e50b8d] Running
	I1205 19:37:23.057611   13952 system_pods.go:89] "kindnet-b6nhd" [2863a3e1-2878-4b1f-b10e-c1f20e137d62] Running
	I1205 19:37:23.057615   13952 system_pods.go:89] "kube-apiserver-addons-030936" [16aa497e-3c1e-4dc3-a68d-03e471801572] Running
	I1205 19:37:23.057622   13952 system_pods.go:89] "kube-controller-manager-addons-030936" [a14860cb-50ff-49fa-a08b-ee3282939d60] Running
	I1205 19:37:23.057627   13952 system_pods.go:89] "kube-ingress-dns-minikube" [f6dfd03f-7966-4c37-89c4-e5a4a1c2e395] Running
	I1205 19:37:23.057631   13952 system_pods.go:89] "kube-proxy-kp9gj" [ef75f123-2e3d-4345-be48-46c46e8aa537] Running
	I1205 19:37:23.057635   13952 system_pods.go:89] "kube-scheduler-addons-030936" [23a4feca-cd28-4ed4-b9a6-85cc60e7843f] Running
	I1205 19:37:23.057642   13952 system_pods.go:89] "metrics-server-7c66d45ddc-8586h" [22718867-f984-4ef4-846c-45896c7a82bf] Running
	I1205 19:37:23.057647   13952 system_pods.go:89] "nvidia-device-plugin-daemonset-wnvvv" [78a4b26e-4608-4170-8a6a-de17b217468b] Running
	I1205 19:37:23.057650   13952 system_pods.go:89] "registry-hmgc4" [4f36e16b-74e5-4183-ae54-777afcc87dc9] Running
	I1205 19:37:23.057654   13952 system_pods.go:89] "registry-proxy-9wsfw" [23d952c3-eba0-4788-b241-d477ed5081a1] Running
	I1205 19:37:23.057658   13952 system_pods.go:89] "snapshot-controller-58dbcc7b99-gqmd7" [87f47f40-88ec-4064-8493-13ec94933413] Running
	I1205 19:37:23.057664   13952 system_pods.go:89] "snapshot-controller-58dbcc7b99-qhmfd" [8e7a0adb-b7df-409f-978d-28c4e57c2cfb] Running
	I1205 19:37:23.057668   13952 system_pods.go:89] "storage-provisioner" [ef82e7dd-313a-447e-84be-b95404c573a6] Running
	I1205 19:37:23.057675   13952 system_pods.go:89] "tiller-deploy-7b677967b9-cdtmt" [9203256c-9bc5-49b8-8ef1-47ca632955a8] Running
	I1205 19:37:23.057680   13952 system_pods.go:126] duration metric: took 8.175097ms to wait for k8s-apps to be running ...
	I1205 19:37:23.057689   13952 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:37:23.057724   13952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:37:23.068162   13952 system_svc.go:56] duration metric: took 10.466551ms WaitForService to wait for kubelet.
	I1205 19:37:23.068183   13952 kubeadm.go:581] duration metric: took 1m15.572734214s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 19:37:23.068237   13952 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:37:23.070949   13952 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 19:37:23.070981   13952 node_conditions.go:123] node cpu capacity is 8
	I1205 19:37:23.070998   13952 node_conditions.go:105] duration metric: took 2.752657ms to run NodePressure ...
	I1205 19:37:23.071014   13952 start.go:228] waiting for startup goroutines ...
	I1205 19:37:23.357684   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:23.856863   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:24.357836   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:24.857871   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:25.356548   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:25.857215   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:26.357633   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:26.856647   13952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:27.356707   13952 kapi.go:107] duration metric: took 1m12.519266151s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 19:37:27.358626   13952 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, default-storageclass, inspektor-gadget, helm-tiller, ingress-dns, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1205 19:37:27.360502   13952 addons.go:502] enable addons completed in 1m19.908888017s: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin default-storageclass inspektor-gadget helm-tiller ingress-dns metrics-server storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1205 19:37:27.360541   13952 start.go:233] waiting for cluster config update ...
	I1205 19:37:27.360560   13952 start.go:242] writing updated cluster config ...
	I1205 19:37:27.360814   13952 ssh_runner.go:195] Run: rm -f paused
	I1205 19:37:27.431274   13952 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 19:37:27.433050   13952 out.go:177] * Done! kubectl is now configured to use "addons-030936" cluster and "default" namespace by default
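The start log above loops the same two-step gather for each control-plane component: resolve container IDs with crictl ps, then tail that container's log (plus one journalctl pass for the CRI-O unit itself). A minimal sketch of replaying that by hand against the same profile — assuming addons-030936 is still running; <container-id> is a placeholder for an ID printed by the first command:

	minikube -p addons-030936 ssh "sudo crictl ps -a --quiet --name=kube-apiserver"
	minikube -p addons-030936 ssh "sudo crictl logs --tail 400 <container-id>"
	minikube -p addons-030936 ssh "sudo journalctl -u crio -n 400"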
	
	* 
	* ==> CRI-O <==
	* Dec 05 19:45:23 addons-030936 crio[955]: time="2023-12-05 19:45:23.600461859Z" level=info msg="Creating container: gadget/gadget-q5tnq/gadget" id=ac61dbe9-94d3-48ff-9392-1c4e3e1603a4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:45:23 addons-030936 crio[955]: time="2023-12-05 19:45:23.600544410Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 19:45:23 addons-030936 conmon[11441]: conmon af6486c1bcd3bbbd0e24 <nwarn>: runtime stderr: time="2023-12-05T19:45:23Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	                                             time="2023-12-05T19:45:23Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	                                             time="2023-12-05T19:45:23Z" level=warning msg="lstat : no such file or directory"
	                                             time="2023-12-05T19:45:23Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:45:23 addons-030936 conmon[11441]: conmon af6486c1bcd3bbbd0e24 <error>: Failed to create container: exit status 1
	Dec 05 19:45:23 addons-030936 crio[955]: time="2023-12-05 19:45:23.671565420Z" level=error msg="Container creation error: time=\"2023-12-05T19:45:23Z\" level=warning msg=\"cannot toggle freezer: cgroups not configured for container\"\ntime=\"2023-12-05T19:45:23Z\" level=warning msg=\"cannot toggle freezer: cgroups not configured for container\"\ntime=\"2023-12-05T19:45:23Z\" level=warning msg=\"lstat : no such file or directory\"\ntime=\"2023-12-05T19:45:23Z\" level=error msg=\"container_linux.go:380: starting container process caused: exec: \\\"/entrypoint.sh\\\": stat /entrypoint.sh: no such file or directory\"\n" id=ac61dbe9-94d3-48ff-9392-1c4e3e1603a4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:45:23 addons-030936 crio[955]: time="2023-12-05 19:45:23.679346279Z" level=info msg="createCtr: deleting container ID af6486c1bcd3bbbd0e2460c7a5666a2b7306b54902c7e8905189743e84cab2df from idIndex" id=ac61dbe9-94d3-48ff-9392-1c4e3e1603a4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:45:23 addons-030936 crio[955]: time="2023-12-05 19:45:23.679416039Z" level=info msg="createCtr: deleting container ID af6486c1bcd3bbbd0e2460c7a5666a2b7306b54902c7e8905189743e84cab2df from idIndex" id=ac61dbe9-94d3-48ff-9392-1c4e3e1603a4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:45:23 addons-030936 crio[955]: time="2023-12-05 19:45:23.679441568Z" level=info msg="createCtr: deleting container ID af6486c1bcd3bbbd0e2460c7a5666a2b7306b54902c7e8905189743e84cab2df from idIndex" id=ac61dbe9-94d3-48ff-9392-1c4e3e1603a4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:45:23 addons-030936 crio[955]: time="2023-12-05 19:45:23.685146674Z" level=info msg="createCtr: deleting container ID af6486c1bcd3bbbd0e2460c7a5666a2b7306b54902c7e8905189743e84cab2df from idIndex" id=ac61dbe9-94d3-48ff-9392-1c4e3e1603a4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:45:37 addons-030936 crio[955]: time="2023-12-05 19:45:37.361754244Z" level=info msg="Checking image status: ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" id=ad20f868-0539-4f40-9ed1-8d3ba794fdc1 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:45:37 addons-030936 crio[955]: time="2023-12-05 19:45:37.361999119Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d378d53ef198dac0270a2861e7752267d41db8b5bc6e33fb7376fd77122fa43c,RepoTags:[],RepoDigests:[ghcr.io/inspektor-gadget/inspektor-gadget@sha256:2c4859cacbc95d19331bdb9eaedf709c7d2655a04a74c4e93acc2e263e31b1ce ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931],Size_:249356252,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=ad20f868-0539-4f40-9ed1-8d3ba794fdc1 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:45:37 addons-030936 crio[955]: time="2023-12-05 19:45:37.362587993Z" level=info msg="Pulling image: ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" id=d0008133-d6a3-46c9-bf74-cb25bf333bb3 name=/runtime.v1.ImageService/PullImage
	Dec 05 19:45:37 addons-030936 crio[955]: time="2023-12-05 19:45:37.367402136Z" level=info msg="Trying to access \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931\""
	Dec 05 19:45:37 addons-030936 crio[955]: time="2023-12-05 19:45:37.535000202Z" level=info msg="Pulled image: ghcr.io/inspektor-gadget/inspektor-gadget@sha256:2c4859cacbc95d19331bdb9eaedf709c7d2655a04a74c4e93acc2e263e31b1ce" id=d0008133-d6a3-46c9-bf74-cb25bf333bb3 name=/runtime.v1.ImageService/PullImage
	Dec 05 19:45:37 addons-030936 crio[955]: time="2023-12-05 19:45:37.535807328Z" level=info msg="Checking image status: ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" id=b0ff39bd-5995-4293-b29b-69eb883894d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:45:37 addons-030936 crio[955]: time="2023-12-05 19:45:37.536058152Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d378d53ef198dac0270a2861e7752267d41db8b5bc6e33fb7376fd77122fa43c,RepoTags:[],RepoDigests:[ghcr.io/inspektor-gadget/inspektor-gadget@sha256:2c4859cacbc95d19331bdb9eaedf709c7d2655a04a74c4e93acc2e263e31b1ce ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931],Size_:249356252,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=b0ff39bd-5995-4293-b29b-69eb883894d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:45:37 addons-030936 crio[955]: time="2023-12-05 19:45:37.537068053Z" level=info msg="Creating container: gadget/gadget-q5tnq/gadget" id=986678e4-9cb6-475c-923b-a12e36890b3b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:45:37 addons-030936 crio[955]: time="2023-12-05 19:45:37.537175831Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 19:45:37 addons-030936 conmon[11466]: conmon 943c6796d8e49a75e50e <nwarn>: runtime stderr: time="2023-12-05T19:45:37Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	                                             time="2023-12-05T19:45:37Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	                                             time="2023-12-05T19:45:37Z" level=warning msg="lstat : no such file or directory"
	                                             time="2023-12-05T19:45:37Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:45:37 addons-030936 conmon[11466]: conmon 943c6796d8e49a75e50e <error>: Failed to create container: exit status 1
	Dec 05 19:45:37 addons-030936 crio[955]: time="2023-12-05 19:45:37.599964623Z" level=error msg="Container creation error: time=\"2023-12-05T19:45:37Z\" level=warning msg=\"cannot toggle freezer: cgroups not configured for container\"\ntime=\"2023-12-05T19:45:37Z\" level=warning msg=\"cannot toggle freezer: cgroups not configured for container\"\ntime=\"2023-12-05T19:45:37Z\" level=warning msg=\"lstat : no such file or directory\"\ntime=\"2023-12-05T19:45:37Z\" level=error msg=\"container_linux.go:380: starting container process caused: exec: \\\"/entrypoint.sh\\\": stat /entrypoint.sh: no such file or directory\"\n" id=986678e4-9cb6-475c-923b-a12e36890b3b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:45:37 addons-030936 crio[955]: time="2023-12-05 19:45:37.606895890Z" level=info msg="createCtr: deleting container ID 943c6796d8e49a75e50e992c7b9463005896c5bbc561b21868fc24b3c64ab600 from idIndex" id=986678e4-9cb6-475c-923b-a12e36890b3b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:45:37 addons-030936 crio[955]: time="2023-12-05 19:45:37.606941910Z" level=info msg="createCtr: deleting container ID 943c6796d8e49a75e50e992c7b9463005896c5bbc561b21868fc24b3c64ab600 from idIndex" id=986678e4-9cb6-475c-923b-a12e36890b3b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:45:37 addons-030936 crio[955]: time="2023-12-05 19:45:37.606958750Z" level=info msg="createCtr: deleting container ID 943c6796d8e49a75e50e992c7b9463005896c5bbc561b21868fc24b3c64ab600 from idIndex" id=986678e4-9cb6-475c-923b-a12e36890b3b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:45:37 addons-030936 crio[955]: time="2023-12-05 19:45:37.612676790Z" level=info msg="createCtr: deleting container ID 943c6796d8e49a75e50e992c7b9463005896c5bbc561b21868fc24b3c64ab600 from idIndex" id=986678e4-9cb6-475c-923b-a12e36890b3b name=/runtime.v1.RuntimeService/CreateContainer
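The CRI-O journal above shows the gadget/gadget-q5tnq container failing on every create attempt with exec: "/entrypoint.sh": no such file or directory, which lines up with the TestAddons/parallel/InspektorGadget failure in this run. A hedged one-liner to pull just those entries from the node (again assuming the profile is still up):

	minikube -p addons-030936 ssh "sudo journalctl -u crio -n 400 | grep entrypoint.sh"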
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	643adf8bf1109       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7        5 minutes ago       Running             hello-world-app           0                   e360fe537bf0a       hello-world-app-5d77478584-b574q
	81e3d40cc89e5       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                7 minutes ago       Running             nginx                     0                   2d05afc766095       nginx
	92618ec38fa1e       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1          8 minutes ago       Running             headlamp                  0                   3e3827799bbaf       headlamp-777fd4b855-gcvsv
	7830c8ecdbb09       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06   8 minutes ago       Running             gcp-auth                  0                   7a207a9e766e0       gcp-auth-d4c87556c-6cghg
	97c31a99b9606       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                               9 minutes ago       Running             storage-provisioner       0                   85996e6146e22       storage-provisioner
	ea7f13f6d2b33       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                               9 minutes ago       Running             coredns                   0                   b9bf4177cfd1a       coredns-5dd5756b68-cvgxt
	6531d8dc9c00c       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                               9 minutes ago       Running             kube-proxy                0                   a575b45987803       kube-proxy-kp9gj
	6e4cc5dd757fe       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                               9 minutes ago       Running             kindnet-cni               0                   045e5f0d0336a       kindnet-b6nhd
	479207e0ffc0b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                               9 minutes ago       Running             kube-scheduler            0                   5ebb0d97c335b       kube-scheduler-addons-030936
	f50b81469d1cb       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                               9 minutes ago       Running             kube-controller-manager   0                   bf809c50e7e80       kube-controller-manager-addons-030936
	3aa894543b63c       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                               9 minutes ago       Running             kube-apiserver            0                   65c1dff85fa93       kube-apiserver-addons-030936
	722b14928df5a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                               9 minutes ago       Running             etcd                      0                   0de43ad41b78f       etcd-addons-030936
	
	* 
	* ==> coredns [ea7f13f6d2b33d7c2a9d83802e695dab0b4fd0290ec6c25a264bb8236a7b517f] <==
	* [INFO] 10.244.0.18:50509 - 57152 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059166s
	[INFO] 10.244.0.18:38337 - 3359 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005049456s
	[INFO] 10.244.0.18:38337 - 25376 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.006085754s
	[INFO] 10.244.0.18:53415 - 41055 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004658712s
	[INFO] 10.244.0.18:53415 - 46242 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005264474s
	[INFO] 10.244.0.18:48949 - 54836 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003870071s
	[INFO] 10.244.0.18:48949 - 31025 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004819636s
	[INFO] 10.244.0.18:58581 - 19132 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000089878s
	[INFO] 10.244.0.18:58581 - 33471 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000142612s
	[INFO] 10.244.0.20:41385 - 1325 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000201173s
	[INFO] 10.244.0.20:51386 - 59491 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000194943s
	[INFO] 10.244.0.20:38674 - 35812 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000247835s
	[INFO] 10.244.0.20:44380 - 1506 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000088745s
	[INFO] 10.244.0.20:47769 - 61015 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088264s
	[INFO] 10.244.0.20:35353 - 9871 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008909s
	[INFO] 10.244.0.20:40292 - 10634 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.004968684s
	[INFO] 10.244.0.20:47388 - 2747 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.005156347s
	[INFO] 10.244.0.20:55893 - 7862 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005191727s
	[INFO] 10.244.0.20:34632 - 55165 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005614709s
	[INFO] 10.244.0.20:41508 - 35416 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005762919s
	[INFO] 10.244.0.20:44791 - 47553 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006706816s
	[INFO] 10.244.0.20:51267 - 37381 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.000746103s
	[INFO] 10.244.0.20:37084 - 31011 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000807565s
	[INFO] 10.244.0.24:53856 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000100728s
	[INFO] 10.244.0.24:41023 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077437s
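The coredns log above is ordinary DNS search-path expansion: each short name is retried against every suffix in the pod's search list (cluster.local plus the GCE-internal domains), answering NXDOMAIN until the fully qualified service name resolves NOERROR. A sketch that reproduces the final successful lookup from inside the cluster — the pod name dnsprobe and the busybox:1.36 image are illustrative choices, not taken from this run:

	kubectl --context addons-030936 run dnsprobe --rm -it --restart=Never \
	  --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local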
	
	* 
	* ==> describe nodes <==
	* Name:               addons-030936
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-030936
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=addons-030936
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T19_35_55_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-030936
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 19:35:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-030936
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 19:45:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 19:45:35 +0000   Tue, 05 Dec 2023 19:35:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 19:45:35 +0000   Tue, 05 Dec 2023 19:35:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 19:45:35 +0000   Tue, 05 Dec 2023 19:35:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 19:45:35 +0000   Tue, 05 Dec 2023 19:36:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-030936
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 edda52c3b88241bba284156915f715bd
	  System UUID:                a0e5de66-5ed1-48da-a989-a4190bd59d70
	  Boot ID:                    cdc0538f-6890-4ebd-b17b-f40ba8f6605f
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-b574q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  gadget                      gadget-q5tnq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m31s
	  gcp-auth                    gcp-auth-d4c87556c-6cghg                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m26s
	  headlamp                    headlamp-777fd4b855-gcvsv                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 coredns-5dd5756b68-cvgxt                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m37s
	  kube-system                 etcd-addons-030936                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m51s
	  kube-system                 kindnet-b6nhd                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m37s
	  kube-system                 kube-apiserver-addons-030936             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 kube-controller-manager-addons-030936    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 kube-proxy-kp9gj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 kube-scheduler-addons-030936             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m32s                  kube-proxy       
	  Normal  Starting                 9m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m56s (x8 over 9m56s)  kubelet          Node addons-030936 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m56s (x8 over 9m56s)  kubelet          Node addons-030936 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m56s (x8 over 9m56s)  kubelet          Node addons-030936 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m50s                  kubelet          Node addons-030936 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m50s                  kubelet          Node addons-030936 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m50s                  kubelet          Node addons-030936 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m38s                  node-controller  Node addons-030936 event: Registered Node addons-030936 in Controller
	  Normal  NodeReady                9m4s                   kubelet          Node addons-030936 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.008370] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004438] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000886] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000844] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000941] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001246] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.004793] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.002458] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.213124] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 19:38] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 02 1e 1b 6b b8 50 92 50 01 2d 03 d0 08 00
	[  +1.031783] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 02 1e 1b 6b b8 50 92 50 01 2d 03 d0 08 00
	[  +2.015837] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 02 1e 1b 6b b8 50 92 50 01 2d 03 d0 08 00
	[  +4.191662] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000013] ll header: 00000000: 02 1e 1b 6b b8 50 92 50 01 2d 03 d0 08 00
	[  +8.195417] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 02 1e 1b 6b b8 50 92 50 01 2d 03 d0 08 00
	[ +16.122928] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 02 1e 1b 6b b8 50 92 50 01 2d 03 d0 08 00
	[Dec 5 19:39] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 02 1e 1b 6b b8 50 92 50 01 2d 03 d0 08 00
	
	* 
	* ==> etcd [722b14928df5ad85c9a08460bc989fe0ebfa5a74e5c8977aceec83a975d3e8f5] <==
	* {"level":"info","ts":"2023-12-05T19:35:49.14835Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T19:35:49.148411Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T19:35:49.149102Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-05T19:35:49.149177Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-05T19:36:09.728641Z","caller":"traceutil/trace.go:171","msg":"trace[1335159935] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"182.848866ms","start":"2023-12-05T19:36:09.54577Z","end":"2023-12-05T19:36:09.728618Z","steps":["trace[1335159935] 'process raft request'  (duration: 182.742564ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:36:09.729668Z","caller":"traceutil/trace.go:171","msg":"trace[1553021988] linearizableReadLoop","detail":"{readStateIndex:424; appliedIndex:424; }","duration":"101.142878ms","start":"2023-12-05T19:36:09.628513Z","end":"2023-12-05T19:36:09.729656Z","steps":["trace[1553021988] 'read index received'  (duration: 101.139132ms)","trace[1553021988] 'applied index is now lower than readState.Index'  (duration: 2.952µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-05T19:36:09.729745Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.236923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-05T19:36:09.729935Z","caller":"traceutil/trace.go:171","msg":"trace[875543309] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:412; }","duration":"101.446146ms","start":"2023-12-05T19:36:09.628479Z","end":"2023-12-05T19:36:09.729925Z","steps":["trace[875543309] 'agreement among raft nodes before linearized reading'  (duration: 101.218748ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:36:10.343476Z","caller":"traceutil/trace.go:171","msg":"trace[208797160] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"101.061758ms","start":"2023-12-05T19:36:10.242393Z","end":"2023-12-05T19:36:10.343455Z","steps":["trace[208797160] 'process raft request'  (duration: 100.475919ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:36:10.625632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.916031ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128025614587148481 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:0 > success:<request_put:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" value_size:3057 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-12-05T19:36:10.642466Z","caller":"traceutil/trace.go:171","msg":"trace[1937670736] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"295.240627ms","start":"2023-12-05T19:36:10.347205Z","end":"2023-12-05T19:36:10.642445Z","steps":["trace[1937670736] 'process raft request'  (duration: 82.113293ms)","trace[1937670736] 'compare'  (duration: 195.755606ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-05T19:36:10.64863Z","caller":"traceutil/trace.go:171","msg":"trace[261661201] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"222.127609ms","start":"2023-12-05T19:36:10.426491Z","end":"2023-12-05T19:36:10.648618Z","steps":["trace[261661201] 'process raft request'  (duration: 215.819341ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:36:10.648715Z","caller":"traceutil/trace.go:171","msg":"trace[519088703] linearizableReadLoop","detail":"{readStateIndex:432; appliedIndex:430; }","duration":"107.481972ms","start":"2023-12-05T19:36:10.541227Z","end":"2023-12-05T19:36:10.648709Z","steps":["trace[519088703] 'read index received'  (duration: 31.761µs)","trace[519088703] 'applied index is now lower than readState.Index'  (duration: 107.449693ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-05T19:36:10.648847Z","caller":"traceutil/trace.go:171","msg":"trace[793529674] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"107.468793ms","start":"2023-12-05T19:36:10.541373Z","end":"2023-12-05T19:36:10.648842Z","steps":["trace[793529674] 'process raft request'  (duration: 107.041046ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:36:10.64899Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.776216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-kp9gj\" ","response":"range_response_count:1 size:4422"}
	{"level":"info","ts":"2023-12-05T19:36:10.649006Z","caller":"traceutil/trace.go:171","msg":"trace[246349267] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-kp9gj; range_end:; response_count:1; response_revision:422; }","duration":"107.804641ms","start":"2023-12-05T19:36:10.541197Z","end":"2023-12-05T19:36:10.649001Z","steps":["trace[246349267] 'agreement among raft nodes before linearized reading'  (duration: 107.756603ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:36:11.041773Z","caller":"traceutil/trace.go:171","msg":"trace[1613146233] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"297.968905ms","start":"2023-12-05T19:36:10.743778Z","end":"2023-12-05T19:36:11.041747Z","steps":["trace[1613146233] 'process raft request'  (duration: 290.31315ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:36:11.045571Z","caller":"traceutil/trace.go:171","msg":"trace[1179934501] linearizableReadLoop","detail":"{readStateIndex:438; appliedIndex:434; }","duration":"104.043543ms","start":"2023-12-05T19:36:10.941509Z","end":"2023-12-05T19:36:11.045553Z","steps":["trace[1179934501] 'read index received'  (duration: 92.591513ms)","trace[1179934501] 'applied index is now lower than readState.Index'  (duration: 11.45142ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-05T19:36:11.045789Z","caller":"traceutil/trace.go:171","msg":"trace[973044473] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"104.924509ms","start":"2023-12-05T19:36:10.940849Z","end":"2023-12-05T19:36:11.045773Z","steps":["trace[973044473] 'process raft request'  (duration: 104.523726ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:36:11.046004Z","caller":"traceutil/trace.go:171","msg":"trace[1715040571] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"104.693652ms","start":"2023-12-05T19:36:10.941299Z","end":"2023-12-05T19:36:11.045993Z","steps":["trace[1715040571] 'process raft request'  (duration: 104.175012ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:36:11.046201Z","caller":"traceutil/trace.go:171","msg":"trace[839748364] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"104.747107ms","start":"2023-12-05T19:36:10.941443Z","end":"2023-12-05T19:36:11.04619Z","steps":["trace[839748364] 'process raft request'  (duration: 104.073554ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:36:11.046326Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.814607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-05T19:36:11.046352Z","caller":"traceutil/trace.go:171","msg":"trace[1795452498] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:426; }","duration":"104.853548ms","start":"2023-12-05T19:36:10.94149Z","end":"2023-12-05T19:36:11.046343Z","steps":["trace[1795452498] 'agreement among raft nodes before linearized reading'  (duration: 104.79636ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:36:51.123908Z","caller":"traceutil/trace.go:171","msg":"trace[1388172312] transaction","detail":"{read_only:false; response_revision:983; number_of_response:1; }","duration":"132.784617ms","start":"2023-12-05T19:36:50.991105Z","end":"2023-12-05T19:36:51.12389Z","steps":["trace[1388172312] 'process raft request'  (duration: 132.692754ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:37:39.559312Z","caller":"traceutil/trace.go:171","msg":"trace[1815491735] transaction","detail":"{read_only:false; response_revision:1335; number_of_response:1; }","duration":"189.413062ms","start":"2023-12-05T19:37:39.369882Z","end":"2023-12-05T19:37:39.559295Z","steps":["trace[1815491735] 'process raft request'  (duration: 115.49124ms)","trace[1815491735] 'compare'  (duration: 73.806734ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [7830c8ecdbb0940c8ae8a1a8c94ad2811f0bba0b44bdd24d9b1c7844db1ac002] <==
	* 2023/12/05 19:37:21 GCP Auth Webhook started!
	2023/12/05 19:37:28 Ready to marshal response ...
	2023/12/05 19:37:28 Ready to write response ...
	2023/12/05 19:37:28 Ready to marshal response ...
	2023/12/05 19:37:28 Ready to write response ...
	2023/12/05 19:37:36 Ready to marshal response ...
	2023/12/05 19:37:36 Ready to write response ...
	2023/12/05 19:37:37 Ready to marshal response ...
	2023/12/05 19:37:37 Ready to write response ...
	2023/12/05 19:37:39 Ready to marshal response ...
	2023/12/05 19:37:39 Ready to write response ...
	2023/12/05 19:37:39 Ready to marshal response ...
	2023/12/05 19:37:39 Ready to write response ...
	2023/12/05 19:37:39 Ready to marshal response ...
	2023/12/05 19:37:39 Ready to write response ...
	2023/12/05 19:37:51 Ready to marshal response ...
	2023/12/05 19:37:51 Ready to write response ...
	2023/12/05 19:38:06 Ready to marshal response ...
	2023/12/05 19:38:06 Ready to write response ...
	2023/12/05 19:38:29 Ready to marshal response ...
	2023/12/05 19:38:29 Ready to write response ...
	2023/12/05 19:38:31 Ready to marshal response ...
	2023/12/05 19:38:31 Ready to write response ...
	2023/12/05 19:40:12 Ready to marshal response ...
	2023/12/05 19:40:12 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:45:45 up 28 min,  0 users,  load average: 0.01, 0.20, 0.19
	Linux addons-030936 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [6e4cc5dd757fe0e273d34866f7cfcd6887921a187d372b65dbdef799748762c4] <==
	* I1205 19:43:40.591683       1 main.go:227] handling current node
	I1205 19:43:50.603811       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:43:50.603830       1 main.go:227] handling current node
	I1205 19:44:00.615472       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:44:00.615493       1 main.go:227] handling current node
	I1205 19:44:10.627657       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:44:10.627692       1 main.go:227] handling current node
	I1205 19:44:20.630798       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:44:20.630822       1 main.go:227] handling current node
	I1205 19:44:30.643115       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:44:30.643140       1 main.go:227] handling current node
	I1205 19:44:40.646329       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:44:40.646350       1 main.go:227] handling current node
	I1205 19:44:50.658514       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:44:50.658539       1 main.go:227] handling current node
	I1205 19:45:00.662439       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:45:00.662462       1 main.go:227] handling current node
	I1205 19:45:10.675094       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:45:10.675118       1 main.go:227] handling current node
	I1205 19:45:20.687231       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:45:20.687252       1 main.go:227] handling current node
	I1205 19:45:30.698146       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:45:30.698168       1 main.go:227] handling current node
	I1205 19:45:40.702106       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:45:40.702128       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [3aa894543b63cebd464521f6f8d59bbbb7bdbbc70e1e038e3c98e8fa6dbf7646] <==
	* I1205 19:37:51.567652       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.134.83"}
	E1205 19:37:52.069717       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1205 19:38:17.624524       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1205 19:38:31.680374       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.28:34928: read: connection reset by peer
	I1205 19:38:47.563176       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:47.563241       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:47.569789       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:47.569838       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:47.576516       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:47.576577       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:47.577427       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:47.577483       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:47.586690       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:47.586816       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:47.590877       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:47.590985       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:47.624968       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:47.625027       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:47.625050       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:47.625066       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1205 19:38:48.578467       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1205 19:38:48.625688       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1205 19:38:48.634765       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1205 19:40:12.375934       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.187.22"}
	I1205 19:40:51.245914       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [f50b81469d1cb53a5e41130d581320ee6a88b0c8407d60555c4f6748af7c96dc] <==
	* E1205 19:42:40.401623       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:42:43.789007       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:42:43.789040       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:42:57.764028       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:42:57.764070       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:43:19.659445       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:43:19.659488       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:43:35.080813       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:43:35.080840       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:43:44.415862       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:43:44.415894       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:43:49.730399       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:43:49.730437       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:44:20.113014       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:44:20.113051       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:44:21.171653       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:44:21.171688       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:44:42.803929       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:44:42.803962       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:44:55.577308       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:44:55.577340       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:45:10.759880       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:45:10.759912       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:45:18.969629       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:45:18.969657       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [6531d8dc9c00cff1dc6aab55817fc92e8f0a7cb7010ee036e97bafe134ab5093] <==
	* I1205 19:36:10.835797       1 server_others.go:69] "Using iptables proxy"
	I1205 19:36:11.144455       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1205 19:36:12.044292       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 19:36:12.130017       1 server_others.go:152] "Using iptables Proxier"
	I1205 19:36:12.130064       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1205 19:36:12.130074       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1205 19:36:12.130109       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 19:36:12.130394       1 server.go:846] "Version info" version="v1.28.4"
	I1205 19:36:12.130407       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:36:12.132027       1 config.go:188] "Starting service config controller"
	I1205 19:36:12.132041       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 19:36:12.132069       1 config.go:97] "Starting endpoint slice config controller"
	I1205 19:36:12.132074       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 19:36:12.132565       1 config.go:315] "Starting node config controller"
	I1205 19:36:12.132573       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 19:36:12.441519       1 shared_informer.go:318] Caches are synced for service config
	I1205 19:36:12.441647       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 19:36:12.536287       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [479207e0ffc0b161ae14646234bceb9122cdb05e4df919bfdd1ed4c1471f6a4b] <==
	* W1205 19:35:51.430660       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 19:35:51.430736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1205 19:35:51.430607       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 19:35:51.430797       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1205 19:35:51.431444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1205 19:35:51.431464       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 19:35:51.431483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 19:35:51.431509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1205 19:35:51.431568       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 19:35:51.431697       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1205 19:35:51.431770       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 19:35:51.431844       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1205 19:35:51.431796       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 19:35:51.431874       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1205 19:35:51.431802       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:35:51.431893       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1205 19:35:51.431804       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:35:51.431909       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 19:35:52.332558       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:35:52.332591       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 19:35:52.374732       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:35:52.374764       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1205 19:35:52.401938       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 19:35:52.401970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1205 19:35:52.927227       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 05 19:45:07 addons-030936 kubelet[1565]: time="2023-12-05T19:45:07Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:45:07 addons-030936 kubelet[1565]: time="2023-12-05T19:45:07Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:45:07 addons-030936 kubelet[1565]: E1205 19:45:07.699570    1565 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CreateContainerError: \"container create failed: time=\\\"2023-12-05T19:45:07Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:45:07Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:45:07Z\\\" level=warning msg=\\\"lstat : no such file or directory\\\"\\ntime=\\\"2023-12-05T19:45:07Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: exec: \\\\\\\"/entrypoint.sh\\\\\\\": stat /entrypoint.sh: no such file or directory\\\"\\n\"" pod="gadget/gadget-q5tnq" podUID="56eb188a-c61d-4223-9714-57e2d393fe62"
	Dec 05 19:45:23 addons-030936 kubelet[1565]: E1205 19:45:23.685459    1565 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err=<
	Dec 05 19:45:23 addons-030936 kubelet[1565]:         rpc error: code = Unknown desc = container create failed: time="2023-12-05T19:45:23Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:45:23 addons-030936 kubelet[1565]:         time="2023-12-05T19:45:23Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:45:23 addons-030936 kubelet[1565]:         time="2023-12-05T19:45:23Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:45:23 addons-030936 kubelet[1565]:         time="2023-12-05T19:45:23Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:45:23 addons-030936 kubelet[1565]:  > podSandboxID="aa743815cf0743f063238fbeb6f27e1bd08cb1508cbafa26cd91e337fa4450c5"
	Dec 05 19:45:23 addons-030936 kubelet[1565]: E1205 19:45:23.685671    1565 kuberuntime_manager.go:1261] container &Container{Name:gadget,Image:ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931,Command:[/entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_POD_UID,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.uid,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_IMAGE,Value:ghcr.io/inspektor-gadget/inspektor-gadget,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_VERSION,Value:v0.16.1,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_HOOK_MODE,Value:auto,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_FALLBACK_POD_INFORMER,Value:true,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CONTAINERD_SOCKETPATH,Value:/run/containerd/containerd.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CRIO_SOCKETPATH,Value:/run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_DOCKER_SOCKETPATH,Value:/run/docker.sock,ValueFrom:nil,},EnvVar{Name:HOST_ROOT,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:modules,ReadOnly:false,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:debugfs,ReadOnly:false,MountPath:/sys/kernel/debug,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cgroup,ReadOnly:false,MountPath:/sys/fs/cgroup,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:bpffs,ReadOnly:false,MountPath:/sys/fs/bpf,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4d5dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYSLOG SYS_PTRACE SYS_RESOURCE IPC_LOCK SYS_MODULE NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod gadget-q5tnq_gadget(56eb188a-c61d-4223-9714-57e2d393fe62): CreateContainerError: container create failed: time="2023-12-05T19:45:23Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:45:23 addons-030936 kubelet[1565]: time="2023-12-05T19:45:23Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:45:23 addons-030936 kubelet[1565]: time="2023-12-05T19:45:23Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:45:23 addons-030936 kubelet[1565]: time="2023-12-05T19:45:23Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:45:23 addons-030936 kubelet[1565]: E1205 19:45:23.685736    1565 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CreateContainerError: \"container create failed: time=\\\"2023-12-05T19:45:23Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:45:23Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:45:23Z\\\" level=warning msg=\\\"lstat : no such file or directory\\\"\\ntime=\\\"2023-12-05T19:45:23Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: exec: \\\\\\\"/entrypoint.sh\\\\\\\": stat /entrypoint.sh: no such file or directory\\\"\\n\"" pod="gadget/gadget-q5tnq" podUID="56eb188a-c61d-4223-9714-57e2d393fe62"
	Dec 05 19:45:37 addons-030936 kubelet[1565]: E1205 19:45:37.612997    1565 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err=<
	Dec 05 19:45:37 addons-030936 kubelet[1565]:         rpc error: code = Unknown desc = container create failed: time="2023-12-05T19:45:37Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:45:37 addons-030936 kubelet[1565]:         time="2023-12-05T19:45:37Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:45:37 addons-030936 kubelet[1565]:         time="2023-12-05T19:45:37Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:45:37 addons-030936 kubelet[1565]:         time="2023-12-05T19:45:37Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:45:37 addons-030936 kubelet[1565]:  > podSandboxID="aa743815cf0743f063238fbeb6f27e1bd08cb1508cbafa26cd91e337fa4450c5"
	Dec 05 19:45:37 addons-030936 kubelet[1565]: E1205 19:45:37.613198    1565 kuberuntime_manager.go:1261] container &Container{Name:gadget,Image:ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931,Command:[/entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_POD_UID,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.uid,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_IMAGE,Value:ghcr.io/inspektor-gadget/inspektor-gadget,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_VERSION,Value:v0.16.1,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_HOOK_MODE,Value:auto,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_FALLBACK_POD_INFORMER,Value:true,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CONTAINERD_SOCKETPATH,Value:/run/containerd/containerd.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CRIO_SOCKETPATH,Value:/run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_DOCKER_SOCKETPATH,Value:/run/docker.sock,ValueFrom:nil,},EnvVar{Name:HOST_ROOT,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:modules,ReadOnly:false,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:debugfs,ReadOnly:false,MountPath:/sys/kernel/debug,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cgroup,ReadOnly:false,MountPath:/sys/fs/cgroup,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:bpffs,ReadOnly:false,MountPath:/sys/fs/bpf,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4d5dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYSLOG SYS_PTRACE SYS_RESOURCE IPC_LOCK SYS_MODULE NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod gadget-q5tnq_gadget(56eb188a-c61d-4223-9714-57e2d393fe62): CreateContainerError: container create failed: time="2023-12-05T19:45:37Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:45:37 addons-030936 kubelet[1565]: time="2023-12-05T19:45:37Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:45:37 addons-030936 kubelet[1565]: time="2023-12-05T19:45:37Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:45:37 addons-030936 kubelet[1565]: time="2023-12-05T19:45:37Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:45:37 addons-030936 kubelet[1565]: E1205 19:45:37.613249    1565 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CreateContainerError: \"container create failed: time=\\\"2023-12-05T19:45:37Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:45:37Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:45:37Z\\\" level=warning msg=\\\"lstat : no such file or directory\\\"\\ntime=\\\"2023-12-05T19:45:37Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: exec: \\\\\\\"/entrypoint.sh\\\\\\\": stat /entrypoint.sh: no such file or directory\\\"\\n\"" pod="gadget/gadget-q5tnq" podUID="56eb188a-c61d-4223-9714-57e2d393fe62"
	
	* 
	* ==> storage-provisioner [97c31a99b960644c16a9d6c36d39c01727dcdbdb6383c9541b6393c7220480dc] <==
	* I1205 19:36:41.560515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 19:36:41.571533       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 19:36:41.571569       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 19:36:41.578813       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 19:36:41.578945       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-030936_197abb5a-49ba-4131-b25a-2420040b942d!
	I1205 19:36:41.579972       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2b3c664c-556d-482f-8994-26b925302f65", APIVersion:"v1", ResourceVersion:"899", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-030936_197abb5a-49ba-4131-b25a-2420040b942d became leader
	I1205 19:36:41.680114       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-030936_197abb5a-49ba-4131-b25a-2420040b942d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-030936 -n addons-030936
helpers_test.go:261: (dbg) Run:  kubectl --context addons-030936 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: gadget-q5tnq
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/InspektorGadget]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-030936 describe pod gadget-q5tnq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-030936 describe pod gadget-q5tnq: exit status 1 (61.611976ms)

** stderr ** 
	Error from server (NotFound): pods "gadget-q5tnq" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-030936 describe pod gadget-q5tnq: exit status 1
--- FAIL: TestAddons/parallel/InspektorGadget (482.54s)
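A by-hand post-mortem sketch for this failure mode (hedged: it assumes the addons-030936 profile is still up and that the DaemonSet behind gadget-q5tnq is named "gadget", which this run does not confirm; DaemonSet pods get recreated, so events and the pod template outlive any single pod name):
	# recent events in the gadget namespace survive pod churn
	kubectl --context addons-030936 -n gadget get events --sort-by=.lastTimestamp
	# which image is expected to ship /entrypoint.sh?
	kubectl --context addons-030936 -n gadget get ds gadget -o jsonpath='{.spec.template.spec.containers[0].image}'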

x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (7.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image load --daemon gcr.io/google-containers/addon-resizer:functional-481133 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-481133 image load --daemon gcr.io/google-containers/addon-resizer:functional-481133 --alsologtostderr: (4.647968121s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-481133 image ls: (2.473104883s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-481133" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (7.12s)
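A sketch for checking by hand whether a loaded image actually reached the CRI-O runtime (assumes the functional-481133 profile still exists; per the audit table further below it was deleted at 19:50, so this is illustrative only):
	out/minikube-linux-amd64 -p functional-481133 image load --daemon gcr.io/google-containers/addon-resizer:functional-481133
	out/minikube-linux-amd64 -p functional-481133 image ls
	# ask the container runtime inside the node directly
	out/minikube-linux-amd64 -p functional-481133 ssh -- sudo crictl images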

x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (182.51s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-612238 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-612238 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.121873829s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-612238 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-612238 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [709ab86c-7a2c-423c-bf0a-be9404581a3e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [709ab86c-7a2c-423c-bf0a-be9404581a3e] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.007677874s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-612238 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1205 19:52:27.449417   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
E1205 19:52:55.133670   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-612238 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.461835633s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
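Exit status 28 is curl's CURLE_OPERATION_TIMEDOUT, so ssh executed the command but no HTTP response arrived before the deadline. A sketch for narrowing this down by hand (commands are illustrative, not taken from this run):
	# is the controller pod actually serving?
	kubectl --context ingress-addon-legacy-612238 -n ingress-nginx get pods -o wide
	# retry with verbose output and a short explicit timeout
	out/minikube-linux-amd64 -p ingress-addon-legacy-612238 ssh -- curl -sv -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/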
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-612238 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-612238 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.007193077s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
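The nslookup timeout means nothing answered DNS on 192.168.49.2:53. A sketch for probing the ingress-dns addon directly (dig flags are standard; the grep filter is a guess, since this run does not show the addon's pod name):
	# one quick query against port 53 on the node IP
	dig +time=2 +tries=1 @192.168.49.2 hello-john.test
	# is the ingress-dns pod running at all?
	kubectl --context ingress-addon-legacy-612238 -n kube-system get pods | grep -i ingress-dns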
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-612238 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-612238 addons disable ingress-dns --alsologtostderr -v=1: (1.850972143s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-612238 addons disable ingress --alsologtostderr -v=1
E1205 19:54:28.653455   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
E1205 19:54:28.658664   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
E1205 19:54:28.668889   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
E1205 19:54:28.689170   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
E1205 19:54:28.729497   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
E1205 19:54:28.809853   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
E1205 19:54:28.970267   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
E1205 19:54:29.290831   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-612238 addons disable ingress --alsologtostderr -v=1: (7.404773385s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-612238
E1205 19:54:29.930949   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-612238:

-- stdout --
	[
	    {
	        "Id": "465f3b51444c1169a89121f419dc1499ce76140bf91b8da0d74d0fa482eda575",
	        "Created": "2023-12-05T19:50:26.168096125Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 55291,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-05T19:50:26.469912011Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:87b04fa850a730e5ca832acdf82e6994855a857f2c65a1e9dfd36c86f13b161b",
	        "ResolvConfPath": "/var/lib/docker/containers/465f3b51444c1169a89121f419dc1499ce76140bf91b8da0d74d0fa482eda575/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/465f3b51444c1169a89121f419dc1499ce76140bf91b8da0d74d0fa482eda575/hostname",
	        "HostsPath": "/var/lib/docker/containers/465f3b51444c1169a89121f419dc1499ce76140bf91b8da0d74d0fa482eda575/hosts",
	        "LogPath": "/var/lib/docker/containers/465f3b51444c1169a89121f419dc1499ce76140bf91b8da0d74d0fa482eda575/465f3b51444c1169a89121f419dc1499ce76140bf91b8da0d74d0fa482eda575-json.log",
	        "Name": "/ingress-addon-legacy-612238",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-612238:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-612238",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a9606894710403636514682209276d4179f191252ad74397ad7d9d00355ec2b6-init/diff:/var/lib/docker/overlay2/8cb0dc756d42dafb4250d739248baa62eaad1aada62df117f76ff2e087cad9b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a9606894710403636514682209276d4179f191252ad74397ad7d9d00355ec2b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a9606894710403636514682209276d4179f191252ad74397ad7d9d00355ec2b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a9606894710403636514682209276d4179f191252ad74397ad7d9d00355ec2b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-612238",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-612238/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-612238",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-612238",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-612238",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f9f277ff390e15f19ebfc4f5789c0a4f423ded00742aee84ae00d8408fb78534",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f9f277ff390e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-612238": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "465f3b51444c",
	                        "ingress-addon-legacy-612238"
	                    ],
	                    "NetworkID": "21b32d73bb131131ad971c976d102f46e3f2560add1619b44c95fe1f4d87c135",
	                    "EndpointID": "a28b365ae35af53b413fb2cbf67200ab21f0b6ca8d94f3f0b8cec319ff17a713",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
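For reference, the published ports above can be read back directly; the first command uses the same Go-template pattern minikube itself applies later in these logs for 22/tcp, and given the NetworkSettings above it should print 32784 (docker port prints the full 127.0.0.1:32784 binding):
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ingress-addon-legacy-612238
	docker port ingress-addon-legacy-612238 8443/tcp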
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-612238 -n ingress-addon-legacy-612238
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-612238 logs -n 25
E1205 19:54:31.211711   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-612238 logs -n 25: (1.070873442s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| start          | -p functional-481133                 | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| start          | -p functional-481133                 | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC |                     |
	|                | --dry-run --alsologtostderr          |                             |         |         |                     |                     |
	|                | -v=1 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| service        | functional-481133 service list       | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC | 05 Dec 23 19:50 UTC |
	| update-context | functional-481133                    | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC | 05 Dec 23 19:50 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-481133                    | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC | 05 Dec 23 19:50 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-481133                    | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC | 05 Dec 23 19:50 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-481133                    | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC | 05 Dec 23 19:50 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-481133                    | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC | 05 Dec 23 19:50 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| service        | functional-481133 service list       | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC | 05 Dec 23 19:50 UTC |
	|                | -o json                              |                             |         |         |                     |                     |
	| ssh            | functional-481133 ssh pgrep          | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-481133                    | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC | 05 Dec 23 19:50 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-481133 image build -t     | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC | 05 Dec 23 19:50 UTC |
	|                | localhost/my-image:functional-481133 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-481133                    | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC | 05 Dec 23 19:50 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| service        | functional-481133 service            | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC | 05 Dec 23 19:50 UTC |
	|                | --namespace=default --https          |                             |         |         |                     |                     |
	|                | --url hello-node                     |                             |         |         |                     |                     |
	| service        | functional-481133                    | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC | 05 Dec 23 19:50 UTC |
	|                | service hello-node --url             |                             |         |         |                     |                     |
	|                | --format={{.IP}}                     |                             |         |         |                     |                     |
	| service        | functional-481133 service            | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC | 05 Dec 23 19:50 UTC |
	|                | hello-node --url                     |                             |         |         |                     |                     |
	| image          | functional-481133 image ls           | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC | 05 Dec 23 19:50 UTC |
	| delete         | -p functional-481133                 | functional-481133           | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC | 05 Dec 23 19:50 UTC |
	| start          | -p ingress-addon-legacy-612238       | ingress-addon-legacy-612238 | jenkins | v1.32.0 | 05 Dec 23 19:50 UTC | 05 Dec 23 19:51 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-612238          | ingress-addon-legacy-612238 | jenkins | v1.32.0 | 05 Dec 23 19:51 UTC | 05 Dec 23 19:51 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-612238          | ingress-addon-legacy-612238 | jenkins | v1.32.0 | 05 Dec 23 19:51 UTC | 05 Dec 23 19:51 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-612238          | ingress-addon-legacy-612238 | jenkins | v1.32.0 | 05 Dec 23 19:51 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-612238 ip       | ingress-addon-legacy-612238 | jenkins | v1.32.0 | 05 Dec 23 19:54 UTC | 05 Dec 23 19:54 UTC |
	| addons         | ingress-addon-legacy-612238          | ingress-addon-legacy-612238 | jenkins | v1.32.0 | 05 Dec 23 19:54 UTC | 05 Dec 23 19:54 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-612238          | ingress-addon-legacy-612238 | jenkins | v1.32.0 | 05 Dec 23 19:54 UTC | 05 Dec 23 19:54 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:50:13
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:50:13.431798   54667 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:50:13.431979   54667 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:50:13.431991   54667 out.go:309] Setting ErrFile to fd 2...
	I1205 19:50:13.431998   54667 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:50:13.432233   54667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
	I1205 19:50:13.432898   54667 out.go:303] Setting JSON to false
	I1205 19:50:13.434017   54667 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1965,"bootTime":1701803848,"procs":446,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:50:13.434081   54667 start.go:138] virtualization: kvm guest
	I1205 19:50:13.436483   54667 out.go:177] * [ingress-addon-legacy-612238] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:50:13.438027   54667 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 19:50:13.438077   54667 notify.go:220] Checking for updates...
	I1205 19:50:13.441115   54667 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:50:13.442714   54667 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 19:50:13.444272   54667 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	I1205 19:50:13.445717   54667 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:50:13.447315   54667 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:50:13.449081   54667 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:50:13.469775   54667 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:50:13.469895   54667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:50:13.522244   54667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-12-05 19:50:13.513411593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:50:13.522340   54667 docker.go:295] overlay module found
	I1205 19:50:13.524398   54667 out.go:177] * Using the docker driver based on user configuration
	I1205 19:50:13.525809   54667 start.go:298] selected driver: docker
	I1205 19:50:13.525818   54667 start.go:902] validating driver "docker" against <nil>
	I1205 19:50:13.525828   54667 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:50:13.526605   54667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:50:13.576658   54667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-12-05 19:50:13.568970175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:50:13.576812   54667 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 19:50:13.577007   54667 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:50:13.579119   54667 out.go:177] * Using Docker driver with root privileges
	I1205 19:50:13.580572   54667 cni.go:84] Creating CNI manager for ""
	I1205 19:50:13.580593   54667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:50:13.580603   54667 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:50:13.580619   54667 start_flags.go:323] config:
	{Name:ingress-addon-legacy-612238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-612238 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:50:13.582128   54667 out.go:177] * Starting control plane node ingress-addon-legacy-612238 in cluster ingress-addon-legacy-612238
	I1205 19:50:13.583417   54667 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:50:13.584913   54667 out.go:177] * Pulling base image ...
	I1205 19:50:13.586374   54667 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1205 19:50:13.586457   54667 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 19:50:13.602276   54667 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon, skipping pull
	I1205 19:50:13.602305   54667 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in daemon, skipping load
	I1205 19:50:13.609963   54667 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1205 19:50:13.610005   54667 cache.go:56] Caching tarball of preloaded images
	I1205 19:50:13.610141   54667 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1205 19:50:13.611936   54667 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1205 19:50:13.613437   54667 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:50:13.640999   54667 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1205 19:50:17.791076   54667 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:50:17.791181   54667 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:50:18.802865   54667 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1205 19:50:18.803238   54667 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/config.json ...
	I1205 19:50:18.803272   54667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/config.json: {Name:mk3c0dbfed2b28097520c2f534e1721b017a257b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:50:18.803446   54667 cache.go:194] Successfully downloaded all kic artifacts
	I1205 19:50:18.803467   54667 start.go:365] acquiring machines lock for ingress-addon-legacy-612238: {Name:mkf5072e236bfbc95951892063cac910e557e4c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:50:18.803507   54667 start.go:369] acquired machines lock for "ingress-addon-legacy-612238" in 31.828µs
	I1205 19:50:18.803531   54667 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-612238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-612238 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:50:18.803594   54667 start.go:125] createHost starting for "" (driver="docker")
	I1205 19:50:18.806022   54667 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1205 19:50:18.806214   54667 start.go:159] libmachine.API.Create for "ingress-addon-legacy-612238" (driver="docker")
	I1205 19:50:18.806251   54667 client.go:168] LocalClient.Create starting
	I1205 19:50:18.806322   54667 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem
	I1205 19:50:18.806353   54667 main.go:141] libmachine: Decoding PEM data...
	I1205 19:50:18.806366   54667 main.go:141] libmachine: Parsing certificate...
	I1205 19:50:18.806416   54667 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem
	I1205 19:50:18.806436   54667 main.go:141] libmachine: Decoding PEM data...
	I1205 19:50:18.806445   54667 main.go:141] libmachine: Parsing certificate...
	I1205 19:50:18.806748   54667 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-612238 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 19:50:18.822337   54667 cli_runner.go:211] docker network inspect ingress-addon-legacy-612238 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 19:50:18.822437   54667 network_create.go:281] running [docker network inspect ingress-addon-legacy-612238] to gather additional debugging logs...
	I1205 19:50:18.822462   54667 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-612238
	W1205 19:50:18.837775   54667 cli_runner.go:211] docker network inspect ingress-addon-legacy-612238 returned with exit code 1
	I1205 19:50:18.837805   54667 network_create.go:284] error running [docker network inspect ingress-addon-legacy-612238]: docker network inspect ingress-addon-legacy-612238: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-612238 not found
	I1205 19:50:18.837818   54667 network_create.go:286] output of [docker network inspect ingress-addon-legacy-612238]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-612238 not found
	
	** /stderr **
	I1205 19:50:18.837911   54667 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:50:18.853136   54667 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002769790}
	I1205 19:50:18.853177   54667 network_create.go:124] attempt to create docker network ingress-addon-legacy-612238 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 19:50:18.853221   54667 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-612238 ingress-addon-legacy-612238
	I1205 19:50:18.902623   54667 network_create.go:108] docker network ingress-addon-legacy-612238 192.168.49.0/24 created
	I1205 19:50:18.902654   54667 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-612238" container
	I1205 19:50:18.902717   54667 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 19:50:18.918063   54667 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-612238 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-612238 --label created_by.minikube.sigs.k8s.io=true
	I1205 19:50:18.935213   54667 oci.go:103] Successfully created a docker volume ingress-addon-legacy-612238
	I1205 19:50:18.935311   54667 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-612238-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-612238 --entrypoint /usr/bin/test -v ingress-addon-legacy-612238:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib
	I1205 19:50:20.679593   54667 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-612238-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-612238 --entrypoint /usr/bin/test -v ingress-addon-legacy-612238:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib: (1.74422577s)
	I1205 19:50:20.679628   54667 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-612238
	I1205 19:50:20.679650   54667 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1205 19:50:20.679670   54667 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 19:50:20.679733   54667 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-612238:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 19:50:26.103751   54667 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-612238:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir: (5.423969488s)
	I1205 19:50:26.103793   54667 kic.go:203] duration metric: took 5.424121 seconds to extract preloaded images to volume
	W1205 19:50:26.103929   54667 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 19:50:26.104025   54667 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 19:50:26.154112   54667 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-612238 --name ingress-addon-legacy-612238 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-612238 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-612238 --network ingress-addon-legacy-612238 --ip 192.168.49.2 --volume ingress-addon-legacy-612238:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1205 19:50:26.477808   54667 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-612238 --format={{.State.Running}}
	I1205 19:50:26.495478   54667 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-612238 --format={{.State.Status}}
	I1205 19:50:26.514238   54667 cli_runner.go:164] Run: docker exec ingress-addon-legacy-612238 stat /var/lib/dpkg/alternatives/iptables
	I1205 19:50:26.555666   54667 oci.go:144] the created container "ingress-addon-legacy-612238" has a running status.
	I1205 19:50:26.555698   54667 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/ingress-addon-legacy-612238/id_rsa...
	I1205 19:50:26.746841   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/ingress-addon-legacy-612238/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1205 19:50:26.746885   54667 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17731-6088/.minikube/machines/ingress-addon-legacy-612238/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 19:50:26.766915   54667 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-612238 --format={{.State.Status}}
	I1205 19:50:26.786285   54667 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 19:50:26.786317   54667 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-612238 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 19:50:26.854151   54667 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-612238 --format={{.State.Status}}
	I1205 19:50:26.882417   54667 machine.go:88] provisioning docker machine ...
	I1205 19:50:26.882456   54667 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-612238"
	I1205 19:50:26.882518   54667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-612238
	I1205 19:50:26.898785   54667 main.go:141] libmachine: Using SSH client type: native
	I1205 19:50:26.899306   54667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1205 19:50:26.899332   54667 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-612238 && echo "ingress-addon-legacy-612238" | sudo tee /etc/hostname
	I1205 19:50:27.096568   54667 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-612238
	
	I1205 19:50:27.096650   54667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-612238
	I1205 19:50:27.113307   54667 main.go:141] libmachine: Using SSH client type: native
	I1205 19:50:27.113624   54667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1205 19:50:27.113644   54667 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-612238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-612238/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-612238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:50:27.260334   54667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
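
The hostname fixup above is an idempotent shell snippet: it only touches /etc/hosts when no line already ends with the hostname, and then either rewrites an existing 127.0.1.1 entry in place or appends one. A sketch of generating that same snippet in Go (hostsFixup is a hypothetical helper, not minikube's):

	package main

	import "fmt"

	// hostsFixup returns the shell fragment shown in the log: skip if a line
	// already ends with hostname, else rewrite the 127.0.1.1 entry or append.
	func hostsFixup(hostname string) string {
		return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, hostname)
	}

	func main() {
		fmt.Println(hostsFixup("ingress-addon-legacy-612238"))
	}
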
	I1205 19:50:27.260368   54667 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6088/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6088/.minikube}
	I1205 19:50:27.260400   54667 ubuntu.go:177] setting up certificates
	I1205 19:50:27.260411   54667 provision.go:83] configureAuth start
	I1205 19:50:27.260466   54667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-612238
	I1205 19:50:27.276897   54667 provision.go:138] copyHostCerts
	I1205 19:50:27.276933   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem
	I1205 19:50:27.276962   54667 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem, removing ...
	I1205 19:50:27.276970   54667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem
	I1205 19:50:27.277032   54667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem (1123 bytes)
	I1205 19:50:27.277132   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem
	I1205 19:50:27.277152   54667 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem, removing ...
	I1205 19:50:27.277156   54667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem
	I1205 19:50:27.277178   54667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem (1679 bytes)
	I1205 19:50:27.277224   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem
	I1205 19:50:27.277239   54667 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem, removing ...
	I1205 19:50:27.277246   54667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem
	I1205 19:50:27.277265   54667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem (1078 bytes)
	I1205 19:50:27.277310   54667 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-612238 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-612238]
	I1205 19:50:27.444538   54667 provision.go:172] copyRemoteCerts
	I1205 19:50:27.444604   54667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:50:27.444641   54667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-612238
	I1205 19:50:27.460888   54667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/ingress-addon-legacy-612238/id_rsa Username:docker}
	I1205 19:50:27.552379   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:50:27.552455   54667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:50:27.574158   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:50:27.574219   54667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1205 19:50:27.595811   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:50:27.595878   54667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 19:50:27.616645   54667 provision.go:86] duration metric: configureAuth took 356.220393ms
	I1205 19:50:27.616676   54667 ubuntu.go:193] setting minikube options for container-runtime
	I1205 19:50:27.616868   54667 config.go:182] Loaded profile config "ingress-addon-legacy-612238": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1205 19:50:27.616982   54667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-612238
	I1205 19:50:27.633229   54667 main.go:141] libmachine: Using SSH client type: native
	I1205 19:50:27.633600   54667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1205 19:50:27.633627   54667 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:50:27.875129   54667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:50:27.875158   54667 machine.go:91] provisioned docker machine in 992.712842ms
	I1205 19:50:27.875170   54667 client.go:171] LocalClient.Create took 9.068910961s
	I1205 19:50:27.875191   54667 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-612238" took 9.068976257s
	I1205 19:50:27.875200   54667 start.go:300] post-start starting for "ingress-addon-legacy-612238" (driver="docker")
	I1205 19:50:27.875212   54667 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:50:27.875281   54667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:50:27.875331   54667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-612238
	I1205 19:50:27.892039   54667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/ingress-addon-legacy-612238/id_rsa Username:docker}
	I1205 19:50:27.985814   54667 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:50:27.988972   54667 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 19:50:27.989014   54667 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 19:50:27.989024   54667 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 19:50:27.989030   54667 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1205 19:50:27.989040   54667 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6088/.minikube/addons for local assets ...
	I1205 19:50:27.989117   54667 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6088/.minikube/files for local assets ...
	I1205 19:50:27.989221   54667 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem -> 128832.pem in /etc/ssl/certs
	I1205 19:50:27.989235   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem -> /etc/ssl/certs/128832.pem
	I1205 19:50:27.989370   54667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:50:27.997520   54667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem --> /etc/ssl/certs/128832.pem (1708 bytes)
	I1205 19:50:28.019970   54667 start.go:303] post-start completed in 144.75469ms
	I1205 19:50:28.020403   54667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-612238
	I1205 19:50:28.036836   54667 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/config.json ...
	I1205 19:50:28.037077   54667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:50:28.037115   54667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-612238
	I1205 19:50:28.053787   54667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/ingress-addon-legacy-612238/id_rsa Username:docker}
	I1205 19:50:28.144904   54667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 19:50:28.148935   54667 start.go:128] duration metric: createHost completed in 9.345329748s
	I1205 19:50:28.148959   54667 start.go:83] releasing machines lock for "ingress-addon-legacy-612238", held for 9.345441722s
	I1205 19:50:28.149012   54667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-612238
	I1205 19:50:28.164833   54667 ssh_runner.go:195] Run: cat /version.json
	I1205 19:50:28.164880   54667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-612238
	I1205 19:50:28.164956   54667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:50:28.165009   54667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-612238
	I1205 19:50:28.181310   54667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/ingress-addon-legacy-612238/id_rsa Username:docker}
	I1205 19:50:28.181788   54667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/ingress-addon-legacy-612238/id_rsa Username:docker}
	I1205 19:50:28.361446   54667 ssh_runner.go:195] Run: systemctl --version
	I1205 19:50:28.365713   54667 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:50:28.503213   54667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 19:50:28.507495   54667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:50:28.524350   54667 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 19:50:28.524426   54667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:50:28.551167   54667 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1205 19:50:28.551189   54667 start.go:475] detecting cgroup driver to use...
	I1205 19:50:28.551215   54667 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1205 19:50:28.551257   54667 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:50:28.564717   54667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:50:28.574632   54667 docker.go:203] disabling cri-docker service (if available) ...
	I1205 19:50:28.574682   54667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:50:28.586973   54667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:50:28.599659   54667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:50:28.681072   54667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:50:28.765380   54667 docker.go:219] disabling docker service ...
	I1205 19:50:28.765439   54667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:50:28.782906   54667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:50:28.793962   54667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:50:28.873105   54667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:50:28.952151   54667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:50:28.961953   54667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:50:28.976116   54667 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 19:50:28.976219   54667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:50:28.984733   54667 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:50:28.984793   54667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:50:28.993365   54667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:50:29.001894   54667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:50:29.010405   54667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:50:29.018417   54667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:50:29.025617   54667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:50:29.032918   54667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:50:29.112274   54667 ssh_runner.go:195] Run: sudo systemctl restart crio
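
The sed edits above mostly follow one pattern: force a key in the CRI-O drop-in config to a quoted value, then daemon-reload and restart the service. A Go sketch of that rewrite (setTOMLKey is a hypothetical local helper; the real flow shells out to sed over SSH):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setTOMLKey rewrites every "key = ..." line in a config file to the
	// quoted value, the same effect as the sed commands logged above.
	func setTOMLKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		// The two overrides applied in this run.
		overrides := map[string]string{
			"pause_image":    "registry.k8s.io/pause:3.2",
			"cgroup_manager": "cgroupfs",
		}
		for k, v := range overrides {
			if err := setTOMLKey("/etc/crio/crio.conf.d/02-crio.conf", k, v); err != nil {
				fmt.Println(err)
			}
		}
	}
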
	I1205 19:50:29.222621   54667 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:50:29.222677   54667 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:50:29.226061   54667 start.go:543] Will wait 60s for crictl version
	I1205 19:50:29.226107   54667 ssh_runner.go:195] Run: which crictl
	I1205 19:50:29.229016   54667 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:50:29.259757   54667 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1205 19:50:29.259864   54667 ssh_runner.go:195] Run: crio --version
	I1205 19:50:29.291979   54667 ssh_runner.go:195] Run: crio --version
	I1205 19:50:29.326908   54667 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1205 19:50:29.328420   54667 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-612238 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:50:29.345110   54667 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 19:50:29.348613   54667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:50:29.358349   54667 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1205 19:50:29.358398   54667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:50:29.401544   54667 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1205 19:50:29.401600   54667 ssh_runner.go:195] Run: which lz4
	I1205 19:50:29.404851   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1205 19:50:29.404927   54667 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 19:50:29.407822   54667 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 19:50:29.407843   54667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1205 19:50:30.335003   54667 crio.go:444] Took 0.930101 seconds to copy over tarball
	I1205 19:50:30.335067   54667 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 19:50:32.691036   54667 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.355938684s)
	I1205 19:50:32.691067   54667 crio.go:451] Took 2.356040 seconds to extract the tarball
	I1205 19:50:32.691076   54667 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 19:50:32.759100   54667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:50:32.790402   54667 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1205 19:50:32.790424   54667 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 19:50:32.790475   54667 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:50:32.790508   54667 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1205 19:50:32.790534   54667 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1205 19:50:32.790559   54667 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1205 19:50:32.790605   54667 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1205 19:50:32.790526   54667 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1205 19:50:32.790700   54667 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1205 19:50:32.790764   54667 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1205 19:50:32.791642   54667 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:50:32.791646   54667 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 19:50:32.791698   54667 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1205 19:50:32.791650   54667 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1205 19:50:32.791723   54667 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1205 19:50:32.791672   54667 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1205 19:50:32.791795   54667 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1205 19:50:32.791930   54667 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1205 19:50:32.967788   54667 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1205 19:50:32.973365   54667 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1205 19:50:32.975982   54667 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1205 19:50:32.978474   54667 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1205 19:50:32.981426   54667 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1205 19:50:33.031024   54667 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1205 19:50:33.031072   54667 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1205 19:50:33.031116   54667 ssh_runner.go:195] Run: which crictl
	I1205 19:50:33.031239   54667 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1205 19:50:33.031272   54667 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1205 19:50:33.031309   54667 ssh_runner.go:195] Run: which crictl
	I1205 19:50:33.038548   54667 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1205 19:50:33.038592   54667 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1205 19:50:33.038634   54667 ssh_runner.go:195] Run: which crictl
	I1205 19:50:33.038554   54667 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1205 19:50:33.038634   54667 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1205 19:50:33.038692   54667 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1205 19:50:33.038702   54667 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1205 19:50:33.038724   54667 ssh_runner.go:195] Run: which crictl
	I1205 19:50:33.038676   54667 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1205 19:50:33.038752   54667 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1205 19:50:33.038768   54667 ssh_runner.go:195] Run: which crictl
	I1205 19:50:33.045095   54667 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 19:50:33.074546   54667 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1205 19:50:33.074689   54667 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1205 19:50:33.074838   54667 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1205 19:50:33.074907   54667 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1205 19:50:33.075010   54667 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1205 19:50:33.134333   54667 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:50:33.136718   54667 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 19:50:33.136766   54667 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 19:50:33.136806   54667 ssh_runner.go:195] Run: which crictl
	I1205 19:50:33.183907   54667 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1205 19:50:33.224804   54667 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1205 19:50:33.262450   54667 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1205 19:50:33.262494   54667 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1205 19:50:33.337206   54667 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1205 19:50:33.337255   54667 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1205 19:50:33.337264   54667 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 19:50:33.337292   54667 ssh_runner.go:195] Run: which crictl
	I1205 19:50:33.368277   54667 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 19:50:33.368329   54667 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1205 19:50:33.399204   54667 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1205 19:50:33.399268   54667 cache_images.go:92] LoadImages completed in 608.831303ms
	W1205 19:50:33.399341   54667 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
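
Each "needs transfer" decision above comes from the same probe: ask the runtime for the image's content ID via podman and compare it to the expected hash; any miss means the image must be loaded from the local cache (which is what fails here, since the cache files were never downloaded). A sketch of that check, reusing the pause-image hash from this log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagePresent reports whether the container runtime already holds ref at
	// the expected content hash, mirroring the "podman image inspect" probes.
	func imagePresent(ref, wantID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", ref).Output()
		if err != nil {
			return false // image not known to the runtime
		}
		return strings.TrimSpace(string(out)) == wantID
	}

	func main() {
		ok := imagePresent("registry.k8s.io/pause:3.2",
			"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
		fmt.Println("needs transfer:", !ok)
	}
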
	I1205 19:50:33.399432   54667 ssh_runner.go:195] Run: crio config
	I1205 19:50:33.439174   54667 cni.go:84] Creating CNI manager for ""
	I1205 19:50:33.439193   54667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:50:33.439209   54667 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 19:50:33.439235   54667 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-612238 NodeName:ingress-addon-legacy-612238 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 19:50:33.439383   54667 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-612238"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
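
A kubeadm config like the one above is essentially a template rendered over the cluster parameters. A trimmed sketch with text/template (the stanza and the parameter struct are illustrative, covering only the ClusterConfiguration networking bits from this run):

	package main

	import (
		"os"
		"text/template"
	)

	// A trimmed version of the ClusterConfiguration stanza above, rendered
	// from a parameter struct. The raw string stays flush-left because YAML
	// forbids tab indentation.
	const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: {{.Endpoint}}:{{.Port}}
kubernetesVersion: {{.Version}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

	type params struct {
		Endpoint, Version, DNSDomain, PodSubnet, ServiceSubnet string
		Port                                                   int
	}

	func main() {
		t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
		_ = t.Execute(os.Stdout, params{
			Endpoint:      "control-plane.minikube.internal",
			Port:          8443,
			Version:       "v1.18.20",
			DNSDomain:     "cluster.local",
			PodSubnet:     "10.244.0.0/16",
			ServiceSubnet: "10.96.0.0/12",
		})
	}
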
	
	I1205 19:50:33.439478   54667 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-612238 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-612238 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 19:50:33.439537   54667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1205 19:50:33.447453   54667 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:50:33.447518   54667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 19:50:33.455135   54667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1205 19:50:33.469922   54667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1205 19:50:33.485244   54667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1205 19:50:33.500339   54667 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 19:50:33.503241   54667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:50:33.512533   54667 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238 for IP: 192.168.49.2
	I1205 19:50:33.512576   54667 certs.go:190] acquiring lock for shared ca certs: {Name:mk6fbd7b27250f9a01d87d327232e4afd0539a2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:50:33.512712   54667 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key
	I1205 19:50:33.512750   54667 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key
	I1205 19:50:33.512790   54667 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.key
	I1205 19:50:33.512804   54667 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt with IP's: []
	I1205 19:50:33.636588   54667 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt ...
	I1205 19:50:33.636618   54667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: {Name:mkcadab00d6c1b6bf12887f083a06e6422bf1276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:50:33.636781   54667 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.key ...
	I1205 19:50:33.636794   54667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.key: {Name:mk56d492df7421458ea14308191939d1ca30579e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:50:33.636863   54667 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/apiserver.key.dd3b5fb2
	I1205 19:50:33.636878   54667 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1205 19:50:33.800657   54667 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/apiserver.crt.dd3b5fb2 ...
	I1205 19:50:33.800687   54667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/apiserver.crt.dd3b5fb2: {Name:mk4ef1c91b73e0c9193a0a668184e2d95ce66c65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:50:33.800835   54667 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/apiserver.key.dd3b5fb2 ...
	I1205 19:50:33.800847   54667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/apiserver.key.dd3b5fb2: {Name:mk3d60fe8fbad05764c7a187cc510479750aea79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:50:33.800911   54667 certs.go:337] copying /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/apiserver.crt
	I1205 19:50:33.800994   54667 certs.go:341] copying /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/apiserver.key
	I1205 19:50:33.801050   54667 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/proxy-client.key
	I1205 19:50:33.801067   54667 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/proxy-client.crt with IP's: []
	I1205 19:50:34.015628   54667 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/proxy-client.crt ...
	I1205 19:50:34.015660   54667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/proxy-client.crt: {Name:mk3862302e70cb05c9de9158eed03707add14685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:50:34.015817   54667 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/proxy-client.key ...
	I1205 19:50:34.015831   54667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/proxy-client.key: {Name:mk2e3e37f27d680441233480f9f272b25cefd40a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
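
The certificate work above (client, apiserver, proxy-client) all runs through Go's crypto/x509. A self-contained sketch that issues a cert carrying the same IP SANs the apiserver cert gets here; unlike the real flow, which signs with the minikube CA, this one self-signs so it runs standalone:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// selfSignedCert issues a certificate with the given IP SANs. The real
	// code signs with the cluster CA; self-signing keeps the sketch portable.
	func selfSignedCert(ips []net.IP) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses:  ips,
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			return nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
	}

	func main() {
		pemBytes, err := selfSignedCert([]net.IP{
			net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", pemBytes)
	}
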
	I1205 19:50:34.015895   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:50:34.015913   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:50:34.015923   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:50:34.015935   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:50:34.015949   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:50:34.015965   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:50:34.015978   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:50:34.015990   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:50:34.016044   54667 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/12883.pem (1338 bytes)
	W1205 19:50:34.016076   54667 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/12883_empty.pem, impossibly tiny 0 bytes
	I1205 19:50:34.016086   54667 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 19:50:34.016111   54667 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:50:34.016135   54667 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:50:34.016160   54667 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem (1679 bytes)
	I1205 19:50:34.016210   54667 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem (1708 bytes)
	I1205 19:50:34.016235   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:50:34.016248   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/12883.pem -> /usr/share/ca-certificates/12883.pem
	I1205 19:50:34.016260   54667 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem -> /usr/share/ca-certificates/128832.pem
	I1205 19:50:34.016865   54667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 19:50:34.038283   54667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 19:50:34.059708   54667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:50:34.080729   54667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 19:50:34.101269   54667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:50:34.122066   54667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:50:34.143144   54667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:50:34.164105   54667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 19:50:34.184689   54667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:50:34.206159   54667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/certs/12883.pem --> /usr/share/ca-certificates/12883.pem (1338 bytes)
	I1205 19:50:34.226716   54667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem --> /usr/share/ca-certificates/128832.pem (1708 bytes)
	I1205 19:50:34.246852   54667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:50:34.262071   54667 ssh_runner.go:195] Run: openssl version
	I1205 19:50:34.266942   54667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128832.pem && ln -fs /usr/share/ca-certificates/128832.pem /etc/ssl/certs/128832.pem"
	I1205 19:50:34.275042   54667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128832.pem
	I1205 19:50:34.278039   54667 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:46 /usr/share/ca-certificates/128832.pem
	I1205 19:50:34.278104   54667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128832.pem
	I1205 19:50:34.284545   54667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128832.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:50:34.292847   54667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:50:34.301295   54667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:50:34.304337   54667 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:50:34.304393   54667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:50:34.310449   54667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:50:34.318685   54667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12883.pem && ln -fs /usr/share/ca-certificates/12883.pem /etc/ssl/certs/12883.pem"
	I1205 19:50:34.326943   54667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12883.pem
	I1205 19:50:34.330118   54667 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:46 /usr/share/ca-certificates/12883.pem
	I1205 19:50:34.330169   54667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12883.pem
	I1205 19:50:34.336351   54667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12883.pem /etc/ssl/certs/51391683.0"
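
The three-step loop above installs each CA into the OpenSSL trust layout: link the PEM into place, compute its subject hash, then create the <hash>.0 symlink that OpenSSL's lookup machinery expects (e.g. b5213941.0 for minikubeCA.pem). A sketch of the same steps run locally (installCACert is a hypothetical helper; minikube performs this over SSH on the node):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCACert hashes a PEM with openssl and creates the <hash>.0
	// symlink in /etc/ssl/certs, matching the "ln -fs" pattern logged above.
	func installCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hash %s: %v", pemPath, err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // mimic ln -fs (force)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}
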
	I1205 19:50:34.345059   54667 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 19:50:34.348110   54667 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 19:50:34.348167   54667 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-612238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-612238 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:50:34.348269   54667 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:50:34.348313   54667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:50:34.380128   54667 cri.go:89] found id: ""
	I1205 19:50:34.380186   54667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:50:34.388171   54667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:50:34.397849   54667 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1205 19:50:34.397909   54667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:50:34.405776   54667 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:50:34.405820   54667 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
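
That Start line is the whole bootstrap in one command: put the pinned v1.18.20 binaries first on PATH and run kubeadm init against the staged config, skipping the preflight checks a container-based node cannot pass. A sketch of issuing it from Go (paths and the shortened ignore list are drawn from the log; error handling is simplified):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// runKubeadmInit runs kubeadm init with a prepended binaries directory,
	// mirroring the `sudo env PATH=... kubeadm init --config ...` call above.
	func runKubeadmInit(binDir, config, ignored string) error {
		cmd := exec.Command("sudo", "env", "PATH="+binDir+":"+os.Getenv("PATH"),
			"kubeadm", "init", "--config", config,
			"--ignore-preflight-errors="+ignored)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		err := runKubeadmInit("/var/lib/minikube/binaries/v1.18.20",
			"/var/tmp/minikube/kubeadm.yaml", "Swap,NumCPU,SystemVerification")
		if err != nil {
			fmt.Println(err)
		}
	}
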
	I1205 19:50:34.447499   54667 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1205 19:50:34.447572   54667 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 19:50:34.483849   54667 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1205 19:50:34.483912   54667 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1205 19:50:34.483940   54667 kubeadm.go:322] OS: Linux
	I1205 19:50:34.483974   54667 kubeadm.go:322] CGROUPS_CPU: enabled
	I1205 19:50:34.484018   54667 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1205 19:50:34.484079   54667 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1205 19:50:34.484181   54667 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1205 19:50:34.484279   54667 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1205 19:50:34.484352   54667 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1205 19:50:34.548784   54667 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:50:34.548973   54667 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:50:34.549112   54667 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 19:50:34.722858   54667 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:50:34.723802   54667 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:50:34.723909   54667 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 19:50:34.796072   54667 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:50:34.799022   54667 out.go:204]   - Generating certificates and keys ...
	I1205 19:50:34.799150   54667 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 19:50:34.799265   54667 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 19:50:34.894778   54667 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:50:35.042492   54667 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:50:35.180943   54667 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:50:35.387918   54667 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1205 19:50:35.534347   54667 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1205 19:50:35.534517   54667 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-612238 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 19:50:35.611096   54667 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1205 19:50:35.611252   54667 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-612238 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 19:50:35.873207   54667 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:50:35.948929   54667 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:50:36.467528   54667 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1205 19:50:36.467651   54667 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:50:36.834238   54667 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:50:37.211403   54667 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:50:37.281528   54667 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:50:37.422101   54667 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:50:37.422730   54667 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:50:37.424838   54667 out.go:204]   - Booting up control plane ...
	I1205 19:50:37.424932   54667 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:50:37.429110   54667 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:50:37.429984   54667 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:50:37.430634   54667 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:50:37.432662   54667 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 19:50:44.435094   54667 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002377 seconds
	I1205 19:50:44.435262   54667 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:50:44.446630   54667 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:50:44.961758   54667 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:50:44.961969   54667 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-612238 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1205 19:50:45.469006   54667 kubeadm.go:322] [bootstrap-token] Using token: js7msd.oc5z4dq9dunfrf08
	I1205 19:50:45.470453   54667 out.go:204]   - Configuring RBAC rules ...
	I1205 19:50:45.470618   54667 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:50:45.475810   54667 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:50:45.481694   54667 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:50:45.483476   54667 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:50:45.485174   54667 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:50:45.487032   54667 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:50:45.493882   54667 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:50:45.653260   54667 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 19:50:45.885402   54667 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 19:50:45.886385   54667 kubeadm.go:322] 
	I1205 19:50:45.886500   54667 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 19:50:45.886511   54667 kubeadm.go:322] 
	I1205 19:50:45.886614   54667 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 19:50:45.886631   54667 kubeadm.go:322] 
	I1205 19:50:45.886661   54667 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 19:50:45.886760   54667 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:50:45.886839   54667 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:50:45.886850   54667 kubeadm.go:322] 
	I1205 19:50:45.886918   54667 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 19:50:45.887035   54667 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:50:45.887140   54667 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:50:45.887154   54667 kubeadm.go:322] 
	I1205 19:50:45.887269   54667 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:50:45.887386   54667 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 19:50:45.887395   54667 kubeadm.go:322] 
	I1205 19:50:45.887506   54667 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token js7msd.oc5z4dq9dunfrf08 \
	I1205 19:50:45.887649   54667 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f61b399cb6776d724c7cf1a9a4fb9913cb1ff908aabc5bdeeadc4488475094de \
	I1205 19:50:45.887690   54667 kubeadm.go:322]     --control-plane 
	I1205 19:50:45.887710   54667 kubeadm.go:322] 
	I1205 19:50:45.887809   54667 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:50:45.887822   54667 kubeadm.go:322] 
	I1205 19:50:45.887910   54667 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token js7msd.oc5z4dq9dunfrf08 \
	I1205 19:50:45.888063   54667 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f61b399cb6776d724c7cf1a9a4fb9913cb1ff908aabc5bdeeadc4488475094de 
	I1205 19:50:45.889832   54667 kubeadm.go:322] W1205 19:50:34.447026    1384 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1205 19:50:45.890122   54667 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1205 19:50:45.890278   54667 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 19:50:45.890444   54667 kubeadm.go:322] W1205 19:50:37.428826    1384 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1205 19:50:45.890656   54667 kubeadm.go:322] W1205 19:50:37.429805    1384 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
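
	Note on the preflight warnings above: the SystemVerification warning only means this node's kernel does not expose its build configuration (the "configs" module is absent on the 5.15.0-1047-gcp kernel), and kubeadm reads that config solely for preflight feature checks, so it is typically harmless. A quick way to confirm on the node (a sketch, assuming shell access via "minikube ssh"):

	  # either the ikconfig proc file or the on-disk config shows the kernel config is readable
	  ls /proc/config.gz /boot/config-$(uname -r) 2>/dev/null || sudo modprobe configs
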
	I1205 19:50:45.890686   54667 cni.go:84] Creating CNI manager for ""
	I1205 19:50:45.890699   54667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:50:45.892708   54667 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 19:50:45.894165   54667 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 19:50:45.897844   54667 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1205 19:50:45.897860   54667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1205 19:50:45.913715   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
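
	Because the docker driver is combined with the crio runtime, minikube selects kindnet as the CNI (cni.go lines above) and applies the generated manifest with the pinned v1.18.20 kubectl. To confirm the CNI daemonset came up afterwards, something like the following should work (a sketch, assuming kindnet's usual "app=kindnet" pod label):

	  kubectl -n kube-system get pods -l app=kindnet -o wide   # expect one kindnet pod per node
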
	I1205 19:50:46.357837   54667 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:50:46.357942   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=ingress-addon-legacy-612238 minikube.k8s.io/updated_at=2023_12_05T19_50_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:46.357945   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:46.449395   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:46.459524   54667 ops.go:34] apiserver oom_adj: -16
	I1205 19:50:46.542496   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:47.108559   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:47.607977   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:48.108738   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:48.608065   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:49.108758   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:49.608679   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:50.108676   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:50.608059   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:51.108291   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:51.608863   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:52.108813   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:52.608678   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:53.108763   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:53.608816   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:54.108299   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:54.607962   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:55.108874   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:55.608703   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:56.108246   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:56.608836   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:57.107952   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:57.607902   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:58.108597   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:58.608854   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:59.107989   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:50:59.608233   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:51:00.108600   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:51:00.608230   54667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:51:00.671720   54667 kubeadm.go:1088] duration metric: took 14.313843916s to wait for elevateKubeSystemPrivileges.
	I1205 19:51:00.671765   54667 kubeadm.go:406] StartCluster complete in 26.323600169s
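
	The burst of identical "kubectl get sa default" runs above is minikube polling at roughly 500ms intervals until the default service account exists, which appears to be what the 14.3s elevateKubeSystemPrivileges metric measures after the minikube-rbac clusterrolebinding is created. A minimal equivalent wait loop (a sketch, reusing the binary and kubeconfig paths from this log):

	  # loop until the apiserver can serve the default service account
	  until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done
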
	I1205 19:51:00.671789   54667 settings.go:142] acquiring lock: {Name:mkfaf26f24f59aefb8a41893ed2faf05d01ae7b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:51:00.671858   54667 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 19:51:00.672647   54667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/kubeconfig: {Name:mk1f41ec1ae8a6de6a6de4f641695e135340252f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:51:00.672891   54667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:51:00.672976   54667 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 19:51:00.673090   54667 config.go:182] Loaded profile config "ingress-addon-legacy-612238": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1205 19:51:00.673139   54667 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-612238"
	I1205 19:51:00.673167   54667 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-612238"
	I1205 19:51:00.673180   54667 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-612238"
	I1205 19:51:00.673214   54667 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-612238"
	I1205 19:51:00.673232   54667 host.go:66] Checking if "ingress-addon-legacy-612238" exists ...
	I1205 19:51:00.673608   54667 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-612238 --format={{.State.Status}}
	I1205 19:51:00.673584   54667 kapi.go:59] client config for ingress-addon-legacy-612238: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 19:51:00.673727   54667 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-612238 --format={{.State.Status}}
	I1205 19:51:00.674346   54667 cert_rotation.go:137] Starting client certificate rotation controller
	I1205 19:51:00.689995   54667 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-612238" context rescaled to 1 replicas
	I1205 19:51:00.690040   54667 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:51:00.692096   54667 out.go:177] * Verifying Kubernetes components...
	I1205 19:51:00.693537   54667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:51:00.697357   54667 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:51:00.695995   54667 kapi.go:59] client config for ingress-addon-legacy-612238: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 19:51:00.698956   54667 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:51:00.698979   54667 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:51:00.699034   54667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-612238
	I1205 19:51:00.699098   54667 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-612238"
	I1205 19:51:00.699142   54667 host.go:66] Checking if "ingress-addon-legacy-612238" exists ...
	I1205 19:51:00.699542   54667 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-612238 --format={{.State.Status}}
	I1205 19:51:00.719534   54667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/ingress-addon-legacy-612238/id_rsa Username:docker}
	I1205 19:51:00.723052   54667 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:51:00.723073   54667 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:51:00.723136   54667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-612238
	I1205 19:51:00.747426   54667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/ingress-addon-legacy-612238/id_rsa Username:docker}
	I1205 19:51:00.827857   54667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 19:51:00.828562   54667 kapi.go:59] client config for ingress-addon-legacy-612238: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 19:51:00.828914   54667 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-612238" to be "Ready" ...
	I1205 19:51:00.947927   54667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:51:00.949422   54667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:51:01.338959   54667 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1205 19:51:01.532432   54667 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1205 19:51:01.533987   54667 addons.go:502] enable addons completed in 861.004663ms: enabled=[default-storageclass storage-provisioner]
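
	The "host record injected" line corresponds to the sed pipeline run at 19:51:00.827857 above, which rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway. Reconstructed from that command, the injected Corefile fragment is:

	  hosts {
	     192.168.49.1 host.minikube.internal
	     fallthrough
	  }
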
	I1205 19:51:02.838511   54667 node_ready.go:58] node "ingress-addon-legacy-612238" has status "Ready":"False"
	I1205 19:51:05.339202   54667 node_ready.go:58] node "ingress-addon-legacy-612238" has status "Ready":"False"
	I1205 19:51:06.508402   54667 node_ready.go:49] node "ingress-addon-legacy-612238" has status "Ready":"True"
	I1205 19:51:06.508431   54667 node_ready.go:38] duration metric: took 5.679489831s waiting for node "ingress-addon-legacy-612238" to be "Ready" ...
	I1205 19:51:06.508445   54667 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:51:06.574695   54667 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-4hz2q" in "kube-system" namespace to be "Ready" ...
	I1205 19:51:08.585176   54667 pod_ready.go:102] pod "coredns-66bff467f8-4hz2q" in "kube-system" namespace has status "Ready":"False"
	I1205 19:51:11.084519   54667 pod_ready.go:102] pod "coredns-66bff467f8-4hz2q" in "kube-system" namespace has status "Ready":"False"
	I1205 19:51:13.585083   54667 pod_ready.go:102] pod "coredns-66bff467f8-4hz2q" in "kube-system" namespace has status "Ready":"False"
	I1205 19:51:16.084279   54667 pod_ready.go:92] pod "coredns-66bff467f8-4hz2q" in "kube-system" namespace has status "Ready":"True"
	I1205 19:51:16.084305   54667 pod_ready.go:81] duration metric: took 9.509582647s waiting for pod "coredns-66bff467f8-4hz2q" in "kube-system" namespace to be "Ready" ...
	I1205 19:51:16.084327   54667 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-612238" in "kube-system" namespace to be "Ready" ...
	I1205 19:51:16.087997   54667 pod_ready.go:92] pod "etcd-ingress-addon-legacy-612238" in "kube-system" namespace has status "Ready":"True"
	I1205 19:51:16.088015   54667 pod_ready.go:81] duration metric: took 3.68235ms waiting for pod "etcd-ingress-addon-legacy-612238" in "kube-system" namespace to be "Ready" ...
	I1205 19:51:16.088026   54667 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-612238" in "kube-system" namespace to be "Ready" ...
	I1205 19:51:16.091867   54667 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-612238" in "kube-system" namespace has status "Ready":"True"
	I1205 19:51:16.091892   54667 pod_ready.go:81] duration metric: took 3.859623ms waiting for pod "kube-apiserver-ingress-addon-legacy-612238" in "kube-system" namespace to be "Ready" ...
	I1205 19:51:16.091905   54667 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-612238" in "kube-system" namespace to be "Ready" ...
	I1205 19:51:16.098007   54667 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-612238" in "kube-system" namespace has status "Ready":"True"
	I1205 19:51:16.098024   54667 pod_ready.go:81] duration metric: took 6.112008ms waiting for pod "kube-controller-manager-ingress-addon-legacy-612238" in "kube-system" namespace to be "Ready" ...
	I1205 19:51:16.098032   54667 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tjmtg" in "kube-system" namespace to be "Ready" ...
	I1205 19:51:16.101538   54667 pod_ready.go:92] pod "kube-proxy-tjmtg" in "kube-system" namespace has status "Ready":"True"
	I1205 19:51:16.101557   54667 pod_ready.go:81] duration metric: took 3.520077ms waiting for pod "kube-proxy-tjmtg" in "kube-system" namespace to be "Ready" ...
	I1205 19:51:16.101568   54667 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-612238" in "kube-system" namespace to be "Ready" ...
	I1205 19:51:16.279933   54667 request.go:629] Waited for 178.306004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-612238
	I1205 19:51:16.479839   54667 request.go:629] Waited for 197.356763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-612238
	I1205 19:51:16.482639   54667 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-612238" in "kube-system" namespace has status "Ready":"True"
	I1205 19:51:16.482658   54667 pod_ready.go:81] duration metric: took 381.084565ms waiting for pod "kube-scheduler-ingress-addon-legacy-612238" in "kube-system" namespace to be "Ready" ...
	I1205 19:51:16.482680   54667 pod_ready.go:38] duration metric: took 9.974211679s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:51:16.482698   54667 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:51:16.482753   54667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:51:16.492961   54667 api_server.go:72] duration metric: took 15.802865769s to wait for apiserver process to appear ...
	I1205 19:51:16.492981   54667 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:51:16.492998   54667 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:51:16.497403   54667 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1205 19:51:16.498116   54667 api_server.go:141] control plane version: v1.18.20
	I1205 19:51:16.498136   54667 api_server.go:131] duration metric: took 5.148897ms to wait for apiserver health ...
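
	The healthz probe above is easy to reproduce by hand; the apiserver serves /healthz over TLS, so pass the cluster CA or skip verification. A sketch, assuming the host can reach 192.168.49.2:8443:

	  curl -sk https://192.168.49.2:8443/healthz   # prints "ok" when the control plane is healthy
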
	I1205 19:51:16.498146   54667 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:51:16.679450   54667 request.go:629] Waited for 181.252315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1205 19:51:16.684982   54667 system_pods.go:59] 8 kube-system pods found
	I1205 19:51:16.685013   54667 system_pods.go:61] "coredns-66bff467f8-4hz2q" [74e3fdda-165c-4256-93f6-29a2a8c760a4] Running
	I1205 19:51:16.685019   54667 system_pods.go:61] "etcd-ingress-addon-legacy-612238" [bf2deb96-ef6e-4c2e-8e0f-c9bdee447e0d] Running
	I1205 19:51:16.685023   54667 system_pods.go:61] "kindnet-rwsn2" [626e97c5-686e-4615-a87d-dcb0a44e7088] Running
	I1205 19:51:16.685027   54667 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-612238" [7813d10d-edae-4603-9be0-d731d3c831af] Running
	I1205 19:51:16.685032   54667 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-612238" [ca4df1ed-0b5e-4f2b-9966-f021c8ff0eab] Running
	I1205 19:51:16.685035   54667 system_pods.go:61] "kube-proxy-tjmtg" [9f27b6a5-0848-46e0-8e48-6a8dc04b9b59] Running
	I1205 19:51:16.685039   54667 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-612238" [494671b1-e4ea-463a-b109-e1f062d7dc04] Running
	I1205 19:51:16.685043   54667 system_pods.go:61] "storage-provisioner" [7dbe5bb1-e3ab-4c03-9f38-96ce9873e93c] Running
	I1205 19:51:16.685048   54667 system_pods.go:74] duration metric: took 186.897801ms to wait for pod list to return data ...
	I1205 19:51:16.685055   54667 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:51:16.879425   54667 request.go:629] Waited for 194.294367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:51:16.881857   54667 default_sa.go:45] found service account: "default"
	I1205 19:51:16.881884   54667 default_sa.go:55] duration metric: took 196.820789ms for default service account to be created ...
	I1205 19:51:16.881892   54667 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:51:17.080259   54667 request.go:629] Waited for 198.312959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1205 19:51:17.085571   54667 system_pods.go:86] 8 kube-system pods found
	I1205 19:51:17.085600   54667 system_pods.go:89] "coredns-66bff467f8-4hz2q" [74e3fdda-165c-4256-93f6-29a2a8c760a4] Running
	I1205 19:51:17.085608   54667 system_pods.go:89] "etcd-ingress-addon-legacy-612238" [bf2deb96-ef6e-4c2e-8e0f-c9bdee447e0d] Running
	I1205 19:51:17.085613   54667 system_pods.go:89] "kindnet-rwsn2" [626e97c5-686e-4615-a87d-dcb0a44e7088] Running
	I1205 19:51:17.085618   54667 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-612238" [7813d10d-edae-4603-9be0-d731d3c831af] Running
	I1205 19:51:17.085625   54667 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-612238" [ca4df1ed-0b5e-4f2b-9966-f021c8ff0eab] Running
	I1205 19:51:17.085635   54667 system_pods.go:89] "kube-proxy-tjmtg" [9f27b6a5-0848-46e0-8e48-6a8dc04b9b59] Running
	I1205 19:51:17.085642   54667 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-612238" [494671b1-e4ea-463a-b109-e1f062d7dc04] Running
	I1205 19:51:17.085650   54667 system_pods.go:89] "storage-provisioner" [7dbe5bb1-e3ab-4c03-9f38-96ce9873e93c] Running
	I1205 19:51:17.085662   54667 system_pods.go:126] duration metric: took 203.763345ms to wait for k8s-apps to be running ...
	I1205 19:51:17.085675   54667 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:51:17.085719   54667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:51:17.096148   54667 system_svc.go:56] duration metric: took 10.466739ms WaitForService to wait for kubelet.
	I1205 19:51:17.096186   54667 kubeadm.go:581] duration metric: took 16.406100059s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 19:51:17.096219   54667 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:51:17.279562   54667 request.go:629] Waited for 183.261211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1205 19:51:17.282272   54667 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 19:51:17.282301   54667 node_conditions.go:123] node cpu capacity is 8
	I1205 19:51:17.282314   54667 node_conditions.go:105] duration metric: took 186.090357ms to run NodePressure ...
	I1205 19:51:17.282327   54667 start.go:228] waiting for startup goroutines ...
	I1205 19:51:17.282336   54667 start.go:233] waiting for cluster config update ...
	I1205 19:51:17.282348   54667 start.go:242] writing updated cluster config ...
	I1205 19:51:17.282597   54667 ssh_runner.go:195] Run: rm -f paused
	I1205 19:51:17.328158   54667 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1205 19:51:17.330127   54667 out.go:177] 
	W1205 19:51:17.331644   54667 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1205 19:51:17.333134   54667 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1205 19:51:17.334969   54667 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-612238" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Dec 05 19:54:07 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:07.939115375Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-s9qww/hello-world-app" id=ad31c012-061f-449c-96cf-1a8f3fc2dbbf name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Dec 05 19:54:07 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:07.939246219Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 19:54:08 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:08.032464271Z" level=info msg="Created container 577d8232164ada70982480c5742abab15c14a1dfd04bb6e489783925651a933e: default/hello-world-app-5f5d8b66bb-s9qww/hello-world-app" id=ad31c012-061f-449c-96cf-1a8f3fc2dbbf name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Dec 05 19:54:08 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:08.033035286Z" level=info msg="Starting container: 577d8232164ada70982480c5742abab15c14a1dfd04bb6e489783925651a933e" id=eab7b09b-803a-4f48-bf54-29043fdbff5d name=/runtime.v1alpha2.RuntimeService/StartContainer
	Dec 05 19:54:08 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:08.043033654Z" level=info msg="Started container" PID=4874 containerID=577d8232164ada70982480c5742abab15c14a1dfd04bb6e489783925651a933e description=default/hello-world-app-5f5d8b66bb-s9qww/hello-world-app id=eab7b09b-803a-4f48-bf54-29043fdbff5d name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=f9bcc058566f254952a3d977ed72f534fee05c79a3e4f0a1c94ca621a05208e0
	Dec 05 19:54:20 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:20.035204634Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=f589f978-dbc4-4c46-8771-67bad410c8d6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 05 19:54:22 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:22.035013117Z" level=info msg="Stopping pod sandbox: f7c286e3e14aec0710a56aaf79020100d5adec5fcee32e436b6810dc5b286710" id=e2bfa204-a7c8-4e74-8b04-709757a09eb4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 05 19:54:22 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:22.035952548Z" level=info msg="Stopped pod sandbox: f7c286e3e14aec0710a56aaf79020100d5adec5fcee32e436b6810dc5b286710" id=e2bfa204-a7c8-4e74-8b04-709757a09eb4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 05 19:54:23 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:23.271810716Z" level=info msg="Stopping container: bf4a8fe7d329a03c03aa22664c0a27fc9c3c7dbd95b660e16adf4cd38883c15a (timeout: 2s)" id=d3fa1cd4-14a0-4491-9690-997034178f29 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 05 19:54:23 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:23.272330684Z" level=info msg="Stopping container: bf4a8fe7d329a03c03aa22664c0a27fc9c3c7dbd95b660e16adf4cd38883c15a (timeout: 2s)" id=a26ee45c-f814-495a-99a5-8095394c8ab8 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 05 19:54:25 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:25.281790897Z" level=warning msg="Stopping container bf4a8fe7d329a03c03aa22664c0a27fc9c3c7dbd95b660e16adf4cd38883c15a with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=d3fa1cd4-14a0-4491-9690-997034178f29 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 05 19:54:25 ingress-addon-legacy-612238 conmon[3413]: conmon bf4a8fe7d329a03c03aa <ninfo>: container 3424 exited with status 137
	Dec 05 19:54:25 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:25.442324616Z" level=info msg="Stopped container bf4a8fe7d329a03c03aa22664c0a27fc9c3c7dbd95b660e16adf4cd38883c15a: ingress-nginx/ingress-nginx-controller-7fcf777cb7-t5d5g/controller" id=d3fa1cd4-14a0-4491-9690-997034178f29 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 05 19:54:25 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:25.442821856Z" level=info msg="Stopped container bf4a8fe7d329a03c03aa22664c0a27fc9c3c7dbd95b660e16adf4cd38883c15a: ingress-nginx/ingress-nginx-controller-7fcf777cb7-t5d5g/controller" id=a26ee45c-f814-495a-99a5-8095394c8ab8 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 05 19:54:25 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:25.442926123Z" level=info msg="Stopping pod sandbox: 02048a1656c7428e326442c305385274271e23544cf27ce99a8bb3b06cdcf31f" id=cf9ff206-c237-4271-94ad-fd9cc82b3390 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 05 19:54:25 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:25.443275458Z" level=info msg="Stopping pod sandbox: 02048a1656c7428e326442c305385274271e23544cf27ce99a8bb3b06cdcf31f" id=ee4d7424-44dd-4d19-8362-6f373e50e12d name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 05 19:54:25 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:25.445810508Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-JKEL2O3SF7RHKZDM - [0:0]\n:KUBE-HP-G2KJMX6CDIIFUGHC - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-G2KJMX6CDIIFUGHC\n-X KUBE-HP-JKEL2O3SF7RHKZDM\nCOMMIT\n"
	Dec 05 19:54:25 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:25.447111411Z" level=info msg="Closing host port tcp:80"
	Dec 05 19:54:25 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:25.447149956Z" level=info msg="Closing host port tcp:443"
	Dec 05 19:54:25 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:25.448089007Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 05 19:54:25 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:25.448107632Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 05 19:54:25 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:25.448261649Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-t5d5g Namespace:ingress-nginx ID:02048a1656c7428e326442c305385274271e23544cf27ce99a8bb3b06cdcf31f UID:4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec NetNS:/var/run/netns/4e64d4a2-c814-4d43-bd98-8b7777970f55 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 05 19:54:25 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:25.448385565Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-t5d5g from CNI network \"kindnet\" (type=ptp)"
	Dec 05 19:54:25 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:25.489665375Z" level=info msg="Stopped pod sandbox: 02048a1656c7428e326442c305385274271e23544cf27ce99a8bb3b06cdcf31f" id=cf9ff206-c237-4271-94ad-fd9cc82b3390 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 05 19:54:25 ingress-addon-legacy-612238 crio[962]: time="2023-12-05 19:54:25.489780172Z" level=info msg="Stopped pod sandbox (already stopped): 02048a1656c7428e326442c305385274271e23544cf27ce99a8bb3b06cdcf31f" id=ee4d7424-44dd-4d19-8362-6f373e50e12d name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	577d8232164ad       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            22 seconds ago      Running             hello-world-app           0                   f9bcc058566f2       hello-world-app-5f5d8b66bb-s9qww
	3e887c8703d8c       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                    2 minutes ago       Running             nginx                     0                   a9797cd352dc7       nginx
	bf4a8fe7d329a       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   02048a1656c74       ingress-nginx-controller-7fcf777cb7-t5d5g
	78c688d74b8bd       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   7a2f4922cdd51       ingress-nginx-admission-patch-p8gv9
	2d2b76ab75823       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   881f0f16c0993       ingress-nginx-admission-create-99kbv
	90093314ea332       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   9031c7e7eff06       storage-provisioner
	187960cd93f70       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   41652d320333d       coredns-66bff467f8-4hz2q
	f39d9a0e2d684       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   5413a723bce28       kindnet-rwsn2
	91b26c5efa9bd       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   0ce5dc199a3a1       kube-proxy-tjmtg
	28075c4ddafd5       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   5a4830a9a9e95       kube-scheduler-ingress-addon-legacy-612238
	74d858f00f561       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   f2479371fc383       kube-controller-manager-ingress-addon-legacy-612238
	70928eabee653       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   b9269faa35525       kube-apiserver-ingress-addon-legacy-612238
	8616ae35aa569       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   e1a0e6c66a477       etcd-ingress-addon-legacy-612238
	
	* 
	* ==> coredns [187960cd93f70df587b86cb07bb97b628542a0639c74762c74834d476ff127e8] <==
	* [INFO] 10.244.0.5:51431 - 18503 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005440268s
	[INFO] 10.244.0.5:60833 - 31051 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003906784s
	[INFO] 10.244.0.5:47554 - 38063 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003893309s
	[INFO] 10.244.0.5:54915 - 34697 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003902973s
	[INFO] 10.244.0.5:38773 - 47426 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003815179s
	[INFO] 10.244.0.5:40833 - 47189 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004177343s
	[INFO] 10.244.0.5:36670 - 26818 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004084715s
	[INFO] 10.244.0.5:51098 - 11650 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003904659s
	[INFO] 10.244.0.5:51431 - 21481 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003946641s
	[INFO] 10.244.0.5:60833 - 16739 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00343141s
	[INFO] 10.244.0.5:47554 - 25304 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003186591s
	[INFO] 10.244.0.5:40833 - 18302 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003136355s
	[INFO] 10.244.0.5:38773 - 31615 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003246135s
	[INFO] 10.244.0.5:36670 - 13869 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003268499s
	[INFO] 10.244.0.5:51431 - 54222 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003370503s
	[INFO] 10.244.0.5:47554 - 40486 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006055s
	[INFO] 10.244.0.5:51098 - 892 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003345404s
	[INFO] 10.244.0.5:60833 - 53381 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000192811s
	[INFO] 10.244.0.5:38773 - 46313 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059854s
	[INFO] 10.244.0.5:51431 - 34952 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000052864s
	[INFO] 10.244.0.5:36670 - 16989 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000163627s
	[INFO] 10.244.0.5:54915 - 39977 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003660138s
	[INFO] 10.244.0.5:51098 - 14123 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000062041s
	[INFO] 10.244.0.5:40833 - 48722 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000353824s
	[INFO] 10.244.0.5:54915 - 37932 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000061409s
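
	The NXDOMAIN/NOERROR pattern above is ordinary resolver search-path expansion: with the default ndots:5, the pod tries every search suffix (including the GCP host's c.k8s-minikube.internal and google.internal domains) before the in-cluster name answers NOERROR. Querying the FQDN directly against the cluster DNS skips that fan-out (a sketch, assuming minikube's default kube-dns service IP and a pod with dig installed):

	  dig +short hello-world-app.default.svc.cluster.local @10.96.0.10   # 10.96.0.10 is an assumption: the usual kube-dns ClusterIP
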
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-612238
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-612238
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=ingress-addon-legacy-612238
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T19_50_46_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 19:50:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-612238
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 19:54:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 19:54:16 +0000   Tue, 05 Dec 2023 19:50:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 19:54:16 +0000   Tue, 05 Dec 2023 19:50:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 19:54:16 +0000   Tue, 05 Dec 2023 19:50:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 19:54:16 +0000   Tue, 05 Dec 2023 19:51:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-612238
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 888c2a9251bd4b6fbf4c99bdbfe10940
	  System UUID:                06ef675b-8670-4234-9d22-6fe9e4745308
	  Boot ID:                    cdc0538f-6890-4ebd-b17b-f40ba8f6605f
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-s9qww                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 coredns-66bff467f8-4hz2q                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m30s
	  kube-system                 etcd-ingress-addon-legacy-612238                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kindnet-rwsn2                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m30s
	  kube-system                 kube-apiserver-ingress-addon-legacy-612238             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-612238    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-proxy-tjmtg                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kube-scheduler-ingress-addon-legacy-612238             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m45s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m44s  kubelet     Node ingress-addon-legacy-612238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s  kubelet     Node ingress-addon-legacy-612238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s  kubelet     Node ingress-addon-legacy-612238 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m29s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m24s  kubelet     Node ingress-addon-legacy-612238 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004954] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.007949] FS-Cache: N-cookie d=00000000a3a7830d{9p.inode} n=00000000d068e7a4
	[  +0.008733] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.275652] FS-Cache: Duplicate cookie detected
	[  +0.004668] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006736] FS-Cache: O-cookie d=00000000a3a7830d{9p.inode} n=0000000022467da8
	[  +0.007351] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.005018] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007952] FS-Cache: N-cookie d=00000000a3a7830d{9p.inode} n=00000000e4e0acc4
	[  +0.008780] FS-Cache: N-key=[8] '0690130200000000'
	[Dec 5 19:50] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 5 19:51] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 66 58 d5 24 58 0e 32 42 cf 8d 23 ab 08 00
	[  +1.008199] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 66 58 d5 24 58 0e 32 42 cf 8d 23 ab 08 00
	[  +2.015878] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 66 58 d5 24 58 0e 32 42 cf 8d 23 ab 08 00
	[Dec 5 19:52] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 66 58 d5 24 58 0e 32 42 cf 8d 23 ab 08 00
	[  +8.187450] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 66 58 d5 24 58 0e 32 42 cf 8d 23 ab 08 00
	[ +16.126869] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 66 58 d5 24 58 0e 32 42 cf 8d 23 ab 08 00
	[ +32.253734] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 66 58 d5 24 58 0e 32 42 cf 8d 23 ab 08 00
	
	* 
	* ==> etcd [8616ae35aa56998a94f7a26083cec329b17c81ba39335bdb70cf21a2718b2f66] <==
	* raft2023/12/05 19:50:38 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-05 19:50:38.825184 W | auth: simple token is not cryptographically signed
	2023-12-05 19:50:38.830287 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-05 19:50:38.832631 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-05 19:50:38.832943 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	raft2023/12/05 19:50:38 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-05 19:50:38.833119 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-12-05 19:50:38.833138 I | embed: listening for peers on 192.168.49.2:2380
	2023-12-05 19:50:38.833154 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/12/05 19:50:39 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/05 19:50:39 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/05 19:50:39 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/05 19:50:39 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/05 19:50:39 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-05 19:50:39.572647 I | embed: ready to serve client requests
	2023-12-05 19:50:39.572732 I | embed: ready to serve client requests
	2023-12-05 19:50:39.572866 I | etcdserver: published {Name:ingress-addon-legacy-612238 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-05 19:50:39.572895 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-05 19:50:39.573496 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-05 19:50:39.573577 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-05 19:50:39.574341 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-05 19:50:39.574517 I | embed: serving client requests on 192.168.49.2:2379
	2023-12-05 19:51:06.506628 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/storage-provisioner.179e076a1d04ef97\" " with result "range_response_count:1 size:814" took too long (274.263927ms) to execute
	2023-12-05 19:51:06.506674 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-612238\" " with result "range_response_count:1 size:6390" took too long (169.409482ms) to execute
	2023-12-05 19:51:06.506759 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:2792" took too long (274.299376ms) to execute
	
	* 
	* ==> kernel <==
	*  19:54:31 up 37 min,  0 users,  load average: 0.90, 0.67, 0.45
	Linux ingress-addon-legacy-612238 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [f39d9a0e2d684a3e98e1699ff6ed0ff350ccdbeddcb3e9c63ddd128f992efc33] <==
	* I1205 19:52:23.683251       1 main.go:227] handling current node
	I1205 19:52:33.686124       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:52:33.686147       1 main.go:227] handling current node
	I1205 19:52:43.699019       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:52:43.699063       1 main.go:227] handling current node
	I1205 19:52:53.702514       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:52:53.702550       1 main.go:227] handling current node
	I1205 19:53:03.711343       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:53:03.711368       1 main.go:227] handling current node
	I1205 19:53:13.723251       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:53:13.723275       1 main.go:227] handling current node
	I1205 19:53:23.735643       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:53:23.735667       1 main.go:227] handling current node
	I1205 19:53:33.739345       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:53:33.739372       1 main.go:227] handling current node
	I1205 19:53:43.751240       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:53:43.751265       1 main.go:227] handling current node
	I1205 19:53:53.754374       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:53:53.754735       1 main.go:227] handling current node
	I1205 19:54:03.757919       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:54:03.757950       1 main.go:227] handling current node
	I1205 19:54:13.767360       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:54:13.767385       1 main.go:227] handling current node
	I1205 19:54:23.779448       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:54:23.779483       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [70928eabee6532079150f0c054686d4b587d88e8235e208215ecc2ab116dba46] <==
	* I1205 19:50:42.598814       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E1205 19:50:42.602962       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1205 19:50:42.698307       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1205 19:50:42.698360       1 cache.go:39] Caches are synced for autoregister controller
	I1205 19:50:42.724439       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1205 19:50:42.724439       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1205 19:50:42.724464       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 19:50:43.597421       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1205 19:50:43.597452       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1205 19:50:43.602036       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1205 19:50:43.604845       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1205 19:50:43.604865       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1205 19:50:43.882470       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 19:50:43.910810       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1205 19:50:43.967419       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1205 19:50:43.968308       1 controller.go:609] quota admission added evaluator for: endpoints
	I1205 19:50:43.971168       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 19:50:44.334629       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 19:50:44.890485       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1205 19:50:45.644021       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1205 19:50:45.874900       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1205 19:51:00.711088       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1205 19:51:00.829819       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1205 19:51:18.031949       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1205 19:51:46.615797       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [74d858f00f56159f7e36ec870e87ef84d0aee9600da958c7fb7b19ee7210d386] <==
	* W1205 19:51:00.824904       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-612238. Assuming now as a timestamp.
	I1205 19:51:00.824956       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1205 19:51:00.824543       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1205 19:51:00.824574       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-612238", UID:"3a398685-0700-4715-8716-df0ab6bfdfd7", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-612238 event: Registered Node ingress-addon-legacy-612238 in Controller
	I1205 19:51:00.826394       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I1205 19:51:00.836104       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"e2749565-8a1b-456f-be63-ee506c26bd41", APIVersion:"apps/v1", ResourceVersion:"238", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-rwsn2
	I1205 19:51:00.838218       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"89898027-8b3e-4cb3-ae6f-08dc00755579", APIVersion:"apps/v1", ResourceVersion:"219", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-tjmtg
	I1205 19:51:00.840746       1 shared_informer.go:230] Caches are synced for resource quota 
	I1205 19:51:00.844825       1 shared_informer.go:230] Caches are synced for resource quota 
	E1205 19:51:00.858063       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"89898027-8b3e-4cb3-ae6f-08dc00755579", ResourceVersion:"219", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63837402645, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00129bb60), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc00129bb80)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00129bba0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001bc4a00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0xc00129bbc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00129bbe0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00129bc20)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000ba45a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000cdf418), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00090d3b0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000ea6418)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000cdf478)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1205 19:51:00.878668       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1205 19:51:00.878695       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1205 19:51:00.924656       1 shared_informer.go:230] Caches are synced for disruption 
	I1205 19:51:00.924695       1 disruption.go:339] Sending events to api server.
	I1205 19:51:00.926642       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1205 19:51:10.825516       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1205 19:51:17.997566       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"74f2bd80-2313-4422-be14-a2da20186a2b", APIVersion:"apps/v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1205 19:51:18.031547       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"10efd376-1cbc-43ec-b16b-e3bf135b4d17", APIVersion:"apps/v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-t5d5g
	I1205 19:51:18.039972       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"0a5998b4-d6ed-4ebc-b256-1e295173050f", APIVersion:"batch/v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-99kbv
	I1205 19:51:18.054490       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"92778229-b345-471f-9edf-f0b8d5663550", APIVersion:"batch/v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-p8gv9
	I1205 19:51:20.168649       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"0a5998b4-d6ed-4ebc-b256-1e295173050f", APIVersion:"batch/v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1205 19:51:20.175990       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"92778229-b345-471f-9edf-f0b8d5663550", APIVersion:"batch/v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1205 19:54:05.440666       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"52928278-913e-41a4-9446-0966d4559a93", APIVersion:"apps/v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1205 19:54:05.446936       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"27ee2fdf-f63a-4077-a031-a154b7275e3b", APIVersion:"apps/v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-s9qww
	
	* 
	* ==> kube-proxy [91b26c5efa9bd7ef7a44079414ed630bfcab796f0189adffa4ef9a4d95c01e4e] <==
	* W1205 19:51:01.597943       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1205 19:51:01.604416       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1205 19:51:01.604444       1 server_others.go:186] Using iptables Proxier.
	I1205 19:51:01.604654       1 server.go:583] Version: v1.18.20
	I1205 19:51:01.605037       1 config.go:315] Starting service config controller
	I1205 19:51:01.605053       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1205 19:51:01.605098       1 config.go:133] Starting endpoints config controller
	I1205 19:51:01.605112       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1205 19:51:01.705211       1 shared_informer.go:230] Caches are synced for service config 
	I1205 19:51:01.705256       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [28075c4ddafd5a65efb71e02ef726bdf85dfeb83fea1ea358f24b64ef94bcd12] <==
	* I1205 19:50:42.635260       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1205 19:50:42.637677       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1205 19:50:42.637831       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 19:50:42.637863       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 19:50:42.637900       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1205 19:50:42.640988       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:50:42.641198       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:50:42.641199       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 19:50:42.641724       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:50:42.641199       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 19:50:42.641312       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:50:42.641386       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 19:50:42.641451       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 19:50:42.641523       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 19:50:42.641583       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 19:50:42.641653       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:50:42.641867       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 19:50:43.471153       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 19:50:43.569742       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:50:43.582237       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:50:43.585647       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 19:50:43.620743       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 19:50:43.664982       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 19:50:43.735464       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1205 19:50:45.438056       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Dec 05 19:53:54 ingress-addon-legacy-612238 kubelet[1875]: E1205 19:53:54.035901    1875 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 05 19:53:54 ingress-addon-legacy-612238 kubelet[1875]: E1205 19:53:54.035937    1875 pod_workers.go:191] Error syncing pod 55a78e34-cb7a-4caa-84d5-e693445f7706 ("kube-ingress-dns-minikube_kube-system(55a78e34-cb7a-4caa-84d5-e693445f7706)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 05 19:54:05 ingress-addon-legacy-612238 kubelet[1875]: E1205 19:54:05.035556    1875 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 05 19:54:05 ingress-addon-legacy-612238 kubelet[1875]: E1205 19:54:05.035594    1875 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 05 19:54:05 ingress-addon-legacy-612238 kubelet[1875]: E1205 19:54:05.035644    1875 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 05 19:54:05 ingress-addon-legacy-612238 kubelet[1875]: E1205 19:54:05.035672    1875 pod_workers.go:191] Error syncing pod 55a78e34-cb7a-4caa-84d5-e693445f7706 ("kube-ingress-dns-minikube_kube-system(55a78e34-cb7a-4caa-84d5-e693445f7706)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 05 19:54:05 ingress-addon-legacy-612238 kubelet[1875]: I1205 19:54:05.451431    1875 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 05 19:54:05 ingress-addon-legacy-612238 kubelet[1875]: I1205 19:54:05.647757    1875 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-dgwjn" (UniqueName: "kubernetes.io/secret/db5ab370-16a6-486a-9cb8-b7719497cb55-default-token-dgwjn") pod "hello-world-app-5f5d8b66bb-s9qww" (UID: "db5ab370-16a6-486a-9cb8-b7719497cb55")
	Dec 05 19:54:06 ingress-addon-legacy-612238 kubelet[1875]: W1205 19:54:06.093089    1875 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/465f3b51444c1169a89121f419dc1499ce76140bf91b8da0d74d0fa482eda575/crio-f9bcc058566f254952a3d977ed72f534fee05c79a3e4f0a1c94ca621a05208e0 WatchSource:0}: Error finding container f9bcc058566f254952a3d977ed72f534fee05c79a3e4f0a1c94ca621a05208e0: Status 404 returned error &{%!s(*http.body=&{0xc000643800 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x750800) %!s(func() error=0x750790)}
	Dec 05 19:54:20 ingress-addon-legacy-612238 kubelet[1875]: E1205 19:54:20.035610    1875 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 05 19:54:20 ingress-addon-legacy-612238 kubelet[1875]: E1205 19:54:20.035660    1875 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 05 19:54:20 ingress-addon-legacy-612238 kubelet[1875]: E1205 19:54:20.035710    1875 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 05 19:54:20 ingress-addon-legacy-612238 kubelet[1875]: E1205 19:54:20.035745    1875 pod_workers.go:191] Error syncing pod 55a78e34-cb7a-4caa-84d5-e693445f7706 ("kube-ingress-dns-minikube_kube-system(55a78e34-cb7a-4caa-84d5-e693445f7706)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 05 19:54:21 ingress-addon-legacy-612238 kubelet[1875]: I1205 19:54:21.284080    1875 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-lctfk" (UniqueName: "kubernetes.io/secret/55a78e34-cb7a-4caa-84d5-e693445f7706-minikube-ingress-dns-token-lctfk") pod "55a78e34-cb7a-4caa-84d5-e693445f7706" (UID: "55a78e34-cb7a-4caa-84d5-e693445f7706")
	Dec 05 19:54:21 ingress-addon-legacy-612238 kubelet[1875]: I1205 19:54:21.286055    1875 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55a78e34-cb7a-4caa-84d5-e693445f7706-minikube-ingress-dns-token-lctfk" (OuterVolumeSpecName: "minikube-ingress-dns-token-lctfk") pod "55a78e34-cb7a-4caa-84d5-e693445f7706" (UID: "55a78e34-cb7a-4caa-84d5-e693445f7706"). InnerVolumeSpecName "minikube-ingress-dns-token-lctfk". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 05 19:54:21 ingress-addon-legacy-612238 kubelet[1875]: I1205 19:54:21.384492    1875 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-lctfk" (UniqueName: "kubernetes.io/secret/55a78e34-cb7a-4caa-84d5-e693445f7706-minikube-ingress-dns-token-lctfk") on node "ingress-addon-legacy-612238" DevicePath ""
	Dec 05 19:54:23 ingress-addon-legacy-612238 kubelet[1875]: E1205 19:54:23.273027    1875 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-t5d5g.179e079915e5b43d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-t5d5g", UID:"4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec", APIVersion:"v1", ResourceVersion:"455", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-612238"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc153fddbd02d1e3d, ext:217667162413, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc153fddbd02d1e3d, ext:217667162413, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-t5d5g.179e079915e5b43d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 05 19:54:23 ingress-addon-legacy-612238 kubelet[1875]: E1205 19:54:23.275870    1875 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-t5d5g.179e079915e5b43d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-t5d5g", UID:"4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec", APIVersion:"v1", ResourceVersion:"455", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-612238"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc153fddbd02d1e3d, ext:217667162413, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc153fddbd0374370, ext:217667827309, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-t5d5g.179e079915e5b43d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 05 19:54:25 ingress-addon-legacy-612238 kubelet[1875]: W1205 19:54:25.492210    1875 pod_container_deletor.go:77] Container "02048a1656c7428e326442c305385274271e23544cf27ce99a8bb3b06cdcf31f" not found in pod's containers
	Dec 05 19:54:27 ingress-addon-legacy-612238 kubelet[1875]: I1205 19:54:27.434858    1875 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec-webhook-cert") pod "4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec" (UID: "4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec")
	Dec 05 19:54:27 ingress-addon-legacy-612238 kubelet[1875]: I1205 19:54:27.434919    1875 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-bgt6l" (UniqueName: "kubernetes.io/secret/4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec-ingress-nginx-token-bgt6l") pod "4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec" (UID: "4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec")
	Dec 05 19:54:27 ingress-addon-legacy-612238 kubelet[1875]: I1205 19:54:27.436874    1875 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec" (UID: "4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 05 19:54:27 ingress-addon-legacy-612238 kubelet[1875]: I1205 19:54:27.437330    1875 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec-ingress-nginx-token-bgt6l" (OuterVolumeSpecName: "ingress-nginx-token-bgt6l") pod "4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec" (UID: "4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec"). InnerVolumeSpecName "ingress-nginx-token-bgt6l". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 05 19:54:27 ingress-addon-legacy-612238 kubelet[1875]: I1205 19:54:27.535247    1875 reconciler.go:319] Volume detached for volume "ingress-nginx-token-bgt6l" (UniqueName: "kubernetes.io/secret/4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec-ingress-nginx-token-bgt6l") on node "ingress-addon-legacy-612238" DevicePath ""
	Dec 05 19:54:27 ingress-addon-legacy-612238 kubelet[1875]: I1205 19:54:27.535287    1875 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/4ceb9d3a-0bed-4bac-b9d7-d5e35ad651ec-webhook-cert") on node "ingress-addon-legacy-612238" DevicePath ""
	
	* 
	* ==> storage-provisioner [90093314ea3325db3689c94442267d05a9d52be06aa8d7f5516963684cb8442b] <==
	* I1205 19:51:11.283852       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 19:51:11.291965       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 19:51:11.292002       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 19:51:11.297641       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 19:51:11.297691       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"19ebb267-6835-4339-b8a1-812fe19295c7", APIVersion:"v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-612238_8db8e0b7-359b-4867-8495-a717480fc167 became leader
	I1205 19:51:11.297780       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-612238_8db8e0b7-359b-4867-8495-a717480fc167!
	I1205 19:51:11.398046       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-612238_8db8e0b7-359b-4867-8495-a717480fc167!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-612238 -n ingress-addon-legacy-612238
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-612238 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (182.51s)
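
Note on the kubelet short-name errors in the post-mortem above: CRI-O refuses the image reference because "cryptexlabs/minikube-ingress-dns" is an unqualified short name and the node's /etc/containers/registries.conf declares no unqualified-search registries. A minimal sketch of the entry that file would need for such names to resolve (assuming docker.io is the intended default registry; this is not taken from the run's actual configuration):

	# /etc/containers/registries.conf (hypothetical fragment)
	# Registries consulted when an image name carries no registry prefix.
	unqualified-search-registries = ["docker.io"]

Alternatively, a fully qualified reference such as docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab would bypass short-name resolution entirely.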

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340918 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340918 -- exec busybox-5bc68d56bd-fcrbt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340918 -- exec busybox-5bc68d56bd-fcrbt -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-340918 -- exec busybox-5bc68d56bd-fcrbt -- sh -c "ping -c 1 192.168.58.1": exit status 1 (195.891525ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-fcrbt): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340918 -- exec busybox-5bc68d56bd-pl2b5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340918 -- exec busybox-5bc68d56bd-pl2b5 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-340918 -- exec busybox-5bc68d56bd-pl2b5 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (186.485216ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-pl2b5): exit status 1
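
Note on the ping failures above: busybox ping opens a raw ICMP socket, which requires CAP_NET_RAW, and the "permission denied (are you root?)" stderr indicates the busybox pods run as non-root without that capability. A hypothetical pod-spec fragment that would grant it (not taken from the test's busybox manifest):

	# Hypothetical securityContext fragment for the busybox container;
	# NET_RAW lets a non-root process open the raw socket ping needs.
	securityContext:
	  capabilities:
	    add: ["NET_RAW"]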
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-340918
helpers_test.go:235: (dbg) docker inspect multinode-340918:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "863592a132d83965efccebe87c95756e84f3f16e202315cf489fc372f87f4af7",
	        "Created": "2023-12-05T19:59:44.047245988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 101052,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-05T19:59:44.339148048Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:87b04fa850a730e5ca832acdf82e6994855a857f2c65a1e9dfd36c86f13b161b",
	        "ResolvConfPath": "/var/lib/docker/containers/863592a132d83965efccebe87c95756e84f3f16e202315cf489fc372f87f4af7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/863592a132d83965efccebe87c95756e84f3f16e202315cf489fc372f87f4af7/hostname",
	        "HostsPath": "/var/lib/docker/containers/863592a132d83965efccebe87c95756e84f3f16e202315cf489fc372f87f4af7/hosts",
	        "LogPath": "/var/lib/docker/containers/863592a132d83965efccebe87c95756e84f3f16e202315cf489fc372f87f4af7/863592a132d83965efccebe87c95756e84f3f16e202315cf489fc372f87f4af7-json.log",
	        "Name": "/multinode-340918",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-340918:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-340918",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1a7b54948569b1be6c6126ffdf80d5f6372870e3d78ed7f4c96398b785341ca8-init/diff:/var/lib/docker/overlay2/8cb0dc756d42dafb4250d739248baa62eaad1aada62df117f76ff2e087cad9b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a7b54948569b1be6c6126ffdf80d5f6372870e3d78ed7f4c96398b785341ca8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a7b54948569b1be6c6126ffdf80d5f6372870e3d78ed7f4c96398b785341ca8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a7b54948569b1be6c6126ffdf80d5f6372870e3d78ed7f4c96398b785341ca8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-340918",
	                "Source": "/var/lib/docker/volumes/multinode-340918/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-340918",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-340918",
	                "name.minikube.sigs.k8s.io": "multinode-340918",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4e6981806a9d2520e06bfe6413070c7e2b05ad4f8db88fe483c1b8769dd53358",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4e6981806a9d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-340918": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "863592a132d8",
	                        "multinode-340918"
	                    ],
	                    "NetworkID": "2430e4504d08553f0cc1de7cab602f837a8077d8d228f2deae4a00921d2202df",
	                    "EndpointID": "d7ee6d1ecac32ec006829135b52c11e39a80b6b48b031c867fd31253914c7696",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
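Editor's note: the host-mapped ports recorded in the inspect output above (all bound to 127.0.0.1 with ephemeral host ports) can be read back with the same Go template minikube itself runs later in this log. A minimal sketch against the container named in this report:

	docker container inspect multinode-340918 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# prints 32847 for this run: the 127.0.0.1-bound host port forwarding to the node's SSH port

The same template with "8443/tcp" yields the API-server mapping (32844 in the output above).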
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-340918 -n multinode-340918
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-340918 logs -n 25: (1.233409905s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-596112                           | mount-start-2-596112 | jenkins | v1.32.0 | 05 Dec 23 19:59 UTC | 05 Dec 23 19:59 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-596112 ssh -- ls                    | mount-start-2-596112 | jenkins | v1.32.0 | 05 Dec 23 19:59 UTC | 05 Dec 23 19:59 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-581760                           | mount-start-1-581760 | jenkins | v1.32.0 | 05 Dec 23 19:59 UTC | 05 Dec 23 19:59 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-596112 ssh -- ls                    | mount-start-2-596112 | jenkins | v1.32.0 | 05 Dec 23 19:59 UTC | 05 Dec 23 19:59 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-596112                           | mount-start-2-596112 | jenkins | v1.32.0 | 05 Dec 23 19:59 UTC | 05 Dec 23 19:59 UTC |
	| start   | -p mount-start-2-596112                           | mount-start-2-596112 | jenkins | v1.32.0 | 05 Dec 23 19:59 UTC | 05 Dec 23 19:59 UTC |
	| ssh     | mount-start-2-596112 ssh -- ls                    | mount-start-2-596112 | jenkins | v1.32.0 | 05 Dec 23 19:59 UTC | 05 Dec 23 19:59 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-596112                           | mount-start-2-596112 | jenkins | v1.32.0 | 05 Dec 23 19:59 UTC | 05 Dec 23 19:59 UTC |
	| delete  | -p mount-start-1-581760                           | mount-start-1-581760 | jenkins | v1.32.0 | 05 Dec 23 19:59 UTC | 05 Dec 23 19:59 UTC |
	| start   | -p multinode-340918                               | multinode-340918     | jenkins | v1.32.0 | 05 Dec 23 19:59 UTC | 05 Dec 23 20:01 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-340918 -- apply -f                   | multinode-340918     | jenkins | v1.32.0 | 05 Dec 23 20:01 UTC | 05 Dec 23 20:01 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-340918 -- rollout                    | multinode-340918     | jenkins | v1.32.0 | 05 Dec 23 20:01 UTC | 05 Dec 23 20:01 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-340918 -- get pods -o                | multinode-340918     | jenkins | v1.32.0 | 05 Dec 23 20:01 UTC | 05 Dec 23 20:01 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-340918 -- get pods -o                | multinode-340918     | jenkins | v1.32.0 | 05 Dec 23 20:01 UTC | 05 Dec 23 20:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-340918 -- exec                       | multinode-340918     | jenkins | v1.32.0 | 05 Dec 23 20:01 UTC | 05 Dec 23 20:01 UTC |
	|         | busybox-5bc68d56bd-fcrbt --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-340918 -- exec                       | multinode-340918     | jenkins | v1.32.0 | 05 Dec 23 20:01 UTC | 05 Dec 23 20:01 UTC |
	|         | busybox-5bc68d56bd-pl2b5 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-340918 -- exec                       | multinode-340918     | jenkins | v1.32.0 | 05 Dec 23 20:01 UTC | 05 Dec 23 20:01 UTC |
	|         | busybox-5bc68d56bd-fcrbt --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-340918 -- exec                       | multinode-340918     | jenkins | v1.32.0 | 05 Dec 23 20:01 UTC | 05 Dec 23 20:01 UTC |
	|         | busybox-5bc68d56bd-pl2b5 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-340918 -- exec                       | multinode-340918     | jenkins | v1.32.0 | 05 Dec 23 20:01 UTC | 05 Dec 23 20:01 UTC |
	|         | busybox-5bc68d56bd-fcrbt -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-340918 -- exec                       | multinode-340918     | jenkins | v1.32.0 | 05 Dec 23 20:01 UTC | 05 Dec 23 20:01 UTC |
	|         | busybox-5bc68d56bd-pl2b5 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-340918 -- get pods -o                | multinode-340918     | jenkins | v1.32.0 | 05 Dec 23 20:01 UTC | 05 Dec 23 20:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-340918 -- exec                       | multinode-340918     | jenkins | v1.32.0 | 05 Dec 23 20:01 UTC | 05 Dec 23 20:01 UTC |
	|         | busybox-5bc68d56bd-fcrbt                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-340918 -- exec                       | multinode-340918     | jenkins | v1.32.0 | 05 Dec 23 20:01 UTC |                     |
	|         | busybox-5bc68d56bd-fcrbt -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-340918 -- exec                       | multinode-340918     | jenkins | v1.32.0 | 05 Dec 23 20:01 UTC | 05 Dec 23 20:01 UTC |
	|         | busybox-5bc68d56bd-pl2b5                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-340918 -- exec                       | multinode-340918     | jenkins | v1.32.0 | 05 Dec 23 20:01 UTC |                     |
	|         | busybox-5bc68d56bd-pl2b5 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
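Editor's note: the two audit rows with no End Time are the pod-to-host pings, consistent with the PingHostFrom2Pods failure above. A sketch of replaying one by hand, using the profile and pod name from this run (the busybox pod name is specific to this report):

	out/minikube-linux-amd64 kubectl -p multinode-340918 -- \
	  exec busybox-5bc68d56bd-fcrbt -- sh -c "ping -c 1 192.168.58.1"
	# 192.168.58.1 is the docker network gateway for this cluster (see the
	# NetworkSettings block above); a timeout here reproduces the failing check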
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:59:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:59:37.956793  100448 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:59:37.957043  100448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:59:37.957051  100448 out.go:309] Setting ErrFile to fd 2...
	I1205 19:59:37.957056  100448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:59:37.957253  100448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
	I1205 19:59:37.957836  100448 out.go:303] Setting JSON to false
	I1205 19:59:37.958822  100448 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2530,"bootTime":1701803848,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:59:37.958894  100448 start.go:138] virtualization: kvm guest
	I1205 19:59:37.961436  100448 out.go:177] * [multinode-340918] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:59:37.963343  100448 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 19:59:37.965003  100448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:59:37.963391  100448 notify.go:220] Checking for updates...
	I1205 19:59:37.967905  100448 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 19:59:37.969276  100448 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	I1205 19:59:37.970882  100448 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:59:37.972371  100448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:59:37.973907  100448 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:59:37.995534  100448 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:59:37.995664  100448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:59:38.046697  100448 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-05 19:59:38.038006616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:59:38.046793  100448 docker.go:295] overlay module found
	I1205 19:59:38.048995  100448 out.go:177] * Using the docker driver based on user configuration
	I1205 19:59:38.051109  100448 start.go:298] selected driver: docker
	I1205 19:59:38.051121  100448 start.go:902] validating driver "docker" against <nil>
	I1205 19:59:38.051131  100448 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:59:38.051868  100448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:59:38.103482  100448 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-05 19:59:38.095027896 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:59:38.103656  100448 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 19:59:38.103928  100448 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:59:38.105854  100448 out.go:177] * Using Docker driver with root privileges
	I1205 19:59:38.107552  100448 cni.go:84] Creating CNI manager for ""
	I1205 19:59:38.107569  100448 cni.go:136] 0 nodes found, recommending kindnet
	I1205 19:59:38.107580  100448 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:59:38.107588  100448 start_flags.go:323] config:
	{Name:multinode-340918 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-340918 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:59:38.109223  100448 out.go:177] * Starting control plane node multinode-340918 in cluster multinode-340918
	I1205 19:59:38.110861  100448 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:59:38.112326  100448 out.go:177] * Pulling base image ...
	I1205 19:59:38.113658  100448 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:59:38.113693  100448 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 19:59:38.113702  100448 cache.go:56] Caching tarball of preloaded images
	I1205 19:59:38.113763  100448 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 19:59:38.113790  100448 preload.go:174] Found /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:59:38.113801  100448 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 19:59:38.114174  100448 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/config.json ...
	I1205 19:59:38.114233  100448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/config.json: {Name:mkfc87859b6fdd180828217f5fa500aef3da3655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:59:38.129312  100448 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon, skipping pull
	I1205 19:59:38.129336  100448 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in daemon, skipping load
	I1205 19:59:38.129358  100448 cache.go:194] Successfully downloaded all kic artifacts
	I1205 19:59:38.129394  100448 start.go:365] acquiring machines lock for multinode-340918: {Name:mk0e347eed8c61c6d1c184fb803426e8f4bb4e90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:59:38.129487  100448 start.go:369] acquired machines lock for "multinode-340918" in 76.778µs
	I1205 19:59:38.129508  100448 start.go:93] Provisioning new machine with config: &{Name:multinode-340918 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-340918 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:59:38.129642  100448 start.go:125] createHost starting for "" (driver="docker")
	I1205 19:59:38.131871  100448 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1205 19:59:38.132187  100448 start.go:159] libmachine.API.Create for "multinode-340918" (driver="docker")
	I1205 19:59:38.132241  100448 client.go:168] LocalClient.Create starting
	I1205 19:59:38.132307  100448 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem
	I1205 19:59:38.132355  100448 main.go:141] libmachine: Decoding PEM data...
	I1205 19:59:38.132379  100448 main.go:141] libmachine: Parsing certificate...
	I1205 19:59:38.132444  100448 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem
	I1205 19:59:38.132469  100448 main.go:141] libmachine: Decoding PEM data...
	I1205 19:59:38.132484  100448 main.go:141] libmachine: Parsing certificate...
	I1205 19:59:38.132913  100448 cli_runner.go:164] Run: docker network inspect multinode-340918 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 19:59:38.148635  100448 cli_runner.go:211] docker network inspect multinode-340918 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 19:59:38.148708  100448 network_create.go:281] running [docker network inspect multinode-340918] to gather additional debugging logs...
	I1205 19:59:38.148746  100448 cli_runner.go:164] Run: docker network inspect multinode-340918
	W1205 19:59:38.163828  100448 cli_runner.go:211] docker network inspect multinode-340918 returned with exit code 1
	I1205 19:59:38.163863  100448 network_create.go:284] error running [docker network inspect multinode-340918]: docker network inspect multinode-340918: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-340918 not found
	I1205 19:59:38.163878  100448 network_create.go:286] output of [docker network inspect multinode-340918]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-340918 not found
	
	** /stderr **
	I1205 19:59:38.163956  100448 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:59:38.180010  100448 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6a8b9e5e1115 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ee:37:c6:e6} reservation:<nil>}
	I1205 19:59:38.180477  100448 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002821ff0}
	I1205 19:59:38.180511  100448 network_create.go:124] attempt to create docker network multinode-340918 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1205 19:59:38.180550  100448 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-340918 multinode-340918
	I1205 19:59:38.232601  100448 network_create.go:108] docker network multinode-340918 192.168.58.0/24 created
	I1205 19:59:38.232628  100448 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-340918" container
	I1205 19:59:38.232681  100448 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 19:59:38.248277  100448 cli_runner.go:164] Run: docker volume create multinode-340918 --label name.minikube.sigs.k8s.io=multinode-340918 --label created_by.minikube.sigs.k8s.io=true
	I1205 19:59:38.265477  100448 oci.go:103] Successfully created a docker volume multinode-340918
	I1205 19:59:38.265545  100448 cli_runner.go:164] Run: docker run --rm --name multinode-340918-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-340918 --entrypoint /usr/bin/test -v multinode-340918:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib
	I1205 19:59:38.740278  100448 oci.go:107] Successfully prepared a docker volume multinode-340918
	I1205 19:59:38.740317  100448 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:59:38.740337  100448 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 19:59:38.740403  100448 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-340918:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 19:59:43.975965  100448 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-340918:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir: (5.235510922s)
	I1205 19:59:43.976000  100448 kic.go:203] duration metric: took 5.235658 seconds to extract preloaded images to volume
	W1205 19:59:43.976176  100448 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 19:59:43.976326  100448 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 19:59:44.031643  100448 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-340918 --name multinode-340918 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-340918 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-340918 --network multinode-340918 --ip 192.168.58.2 --volume multinode-340918:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1205 19:59:44.347637  100448 cli_runner.go:164] Run: docker container inspect multinode-340918 --format={{.State.Running}}
	I1205 19:59:44.366411  100448 cli_runner.go:164] Run: docker container inspect multinode-340918 --format={{.State.Status}}
	I1205 19:59:44.386103  100448 cli_runner.go:164] Run: docker exec multinode-340918 stat /var/lib/dpkg/alternatives/iptables
	I1205 19:59:44.426096  100448 oci.go:144] the created container "multinode-340918" has a running status.
	I1205 19:59:44.426137  100448 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918/id_rsa...
	I1205 19:59:44.579281  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1205 19:59:44.579328  100448 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 19:59:44.600865  100448 cli_runner.go:164] Run: docker container inspect multinode-340918 --format={{.State.Status}}
	I1205 19:59:44.617871  100448 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 19:59:44.617898  100448 kic_runner.go:114] Args: [docker exec --privileged multinode-340918 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 19:59:44.685865  100448 cli_runner.go:164] Run: docker container inspect multinode-340918 --format={{.State.Status}}
	I1205 19:59:44.706374  100448 machine.go:88] provisioning docker machine ...
	I1205 19:59:44.706432  100448 ubuntu.go:169] provisioning hostname "multinode-340918"
	I1205 19:59:44.706499  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918
	I1205 19:59:44.723536  100448 main.go:141] libmachine: Using SSH client type: native
	I1205 19:59:44.724006  100448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1205 19:59:44.724036  100448 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-340918 && echo "multinode-340918" | sudo tee /etc/hostname
	I1205 19:59:44.724846  100448 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52764->127.0.0.1:32847: read: connection reset by peer
	I1205 19:59:47.866171  100448 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-340918
	
	I1205 19:59:47.866249  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918
	I1205 19:59:47.882233  100448 main.go:141] libmachine: Using SSH client type: native
	I1205 19:59:47.882548  100448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1205 19:59:47.882566  100448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-340918' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-340918/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-340918' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:59:48.012157  100448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:59:48.012185  100448 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6088/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6088/.minikube}
	I1205 19:59:48.012225  100448 ubuntu.go:177] setting up certificates
	I1205 19:59:48.012236  100448 provision.go:83] configureAuth start
	I1205 19:59:48.012295  100448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-340918
	I1205 19:59:48.027840  100448 provision.go:138] copyHostCerts
	I1205 19:59:48.027876  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem
	I1205 19:59:48.027911  100448 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem, removing ...
	I1205 19:59:48.027923  100448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem
	I1205 19:59:48.027993  100448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem (1078 bytes)
	I1205 19:59:48.028077  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem
	I1205 19:59:48.028116  100448 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem, removing ...
	I1205 19:59:48.028126  100448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem
	I1205 19:59:48.028163  100448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem (1123 bytes)
	I1205 19:59:48.028242  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem
	I1205 19:59:48.028272  100448 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem, removing ...
	I1205 19:59:48.028281  100448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem
	I1205 19:59:48.028314  100448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem (1679 bytes)
	I1205 19:59:48.028374  100448 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem org=jenkins.multinode-340918 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-340918]
	I1205 19:59:48.217565  100448 provision.go:172] copyRemoteCerts
	I1205 19:59:48.217644  100448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:59:48.217693  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918
	I1205 19:59:48.234587  100448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918/id_rsa Username:docker}
	I1205 19:59:48.328506  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:59:48.328580  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:59:48.349857  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:59:48.349928  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 19:59:48.371248  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:59:48.371303  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:59:48.392978  100448 provision.go:86] duration metric: configureAuth took 380.727979ms
	I1205 19:59:48.393005  100448 ubuntu.go:193] setting minikube options for container-runtime
	I1205 19:59:48.393196  100448 config.go:182] Loaded profile config "multinode-340918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:59:48.393329  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918
	I1205 19:59:48.409768  100448 main.go:141] libmachine: Using SSH client type: native
	I1205 19:59:48.410079  100448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1205 19:59:48.410096  100448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:59:48.624384  100448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:59:48.624410  100448 machine.go:91] provisioned docker machine in 3.91800363s
	I1205 19:59:48.624418  100448 client.go:171] LocalClient.Create took 10.492168317s
	I1205 19:59:48.624440  100448 start.go:167] duration metric: libmachine.API.Create for "multinode-340918" took 10.492252625s
	I1205 19:59:48.624450  100448 start.go:300] post-start starting for "multinode-340918" (driver="docker")
	I1205 19:59:48.624467  100448 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:59:48.624516  100448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:59:48.624558  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918
	I1205 19:59:48.640653  100448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918/id_rsa Username:docker}
	I1205 19:59:48.732564  100448 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:59:48.735420  100448 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1205 19:59:48.735440  100448 command_runner.go:130] > NAME="Ubuntu"
	I1205 19:59:48.735446  100448 command_runner.go:130] > VERSION_ID="22.04"
	I1205 19:59:48.735451  100448 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1205 19:59:48.735456  100448 command_runner.go:130] > VERSION_CODENAME=jammy
	I1205 19:59:48.735460  100448 command_runner.go:130] > ID=ubuntu
	I1205 19:59:48.735463  100448 command_runner.go:130] > ID_LIKE=debian
	I1205 19:59:48.735468  100448 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1205 19:59:48.735473  100448 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1205 19:59:48.735485  100448 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1205 19:59:48.735493  100448 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1205 19:59:48.735497  100448 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1205 19:59:48.735541  100448 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 19:59:48.735562  100448 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 19:59:48.735573  100448 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 19:59:48.735579  100448 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1205 19:59:48.735591  100448 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6088/.minikube/addons for local assets ...
	I1205 19:59:48.735631  100448 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6088/.minikube/files for local assets ...
	I1205 19:59:48.735707  100448 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem -> 128832.pem in /etc/ssl/certs
	I1205 19:59:48.735718  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem -> /etc/ssl/certs/128832.pem
	I1205 19:59:48.735797  100448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:59:48.743779  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem --> /etc/ssl/certs/128832.pem (1708 bytes)
	I1205 19:59:48.764495  100448 start.go:303] post-start completed in 140.028858ms
	I1205 19:59:48.764817  100448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-340918
	I1205 19:59:48.780998  100448 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/config.json ...
	I1205 19:59:48.781232  100448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:59:48.781267  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918
	I1205 19:59:48.796950  100448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918/id_rsa Username:docker}
	I1205 19:59:48.888576  100448 command_runner.go:130] > 24%!
	(MISSING)I1205 19:59:48.888796  100448 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 19:59:48.892581  100448 command_runner.go:130] > 223G
	I1205 19:59:48.892777  100448 start.go:128] duration metric: createHost completed in 10.76312432s
	I1205 19:59:48.892795  100448 start.go:83] releasing machines lock for "multinode-340918", held for 10.76329708s
	I1205 19:59:48.892855  100448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-340918
	I1205 19:59:48.909135  100448 ssh_runner.go:195] Run: cat /version.json
	I1205 19:59:48.909179  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918
	I1205 19:59:48.909208  100448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:59:48.909267  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918
	I1205 19:59:48.925545  100448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918/id_rsa Username:docker}
	I1205 19:59:48.926782  100448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918/id_rsa Username:docker}
	I1205 19:59:49.019572  100448 command_runner.go:130] > {"iso_version": "v1.32.1-1701107474-17206", "kicbase_version": "v0.0.42-1701387262-17703", "minikube_version": "v1.32.0", "commit": "196015715c4eb12e436d5bb69e555ba604cda88e"}
	I1205 19:59:49.019715  100448 ssh_runner.go:195] Run: systemctl --version
	I1205 19:59:49.102682  100448 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 19:59:49.102737  100448 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1205 19:59:49.102778  100448 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 19:59:49.102854  100448 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:59:49.239218  100448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 19:59:49.243225  100448 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1205 19:59:49.243252  100448 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1205 19:59:49.243260  100448 command_runner.go:130] > Device: 36h/54d	Inode: 539841      Links: 1
	I1205 19:59:49.243269  100448 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 19:59:49.243278  100448 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1205 19:59:49.243303  100448 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1205 19:59:49.243320  100448 command_runner.go:130] > Change: 2023-12-05 19:35:17.750849877 +0000
	I1205 19:59:49.243332  100448 command_runner.go:130] >  Birth: 2023-12-05 19:35:17.750849877 +0000
	I1205 19:59:49.243394  100448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:59:49.261307  100448 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 19:59:49.261382  100448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:59:49.287569  100448 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1205 19:59:49.287617  100448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
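
The two steps above neutralize any preinstalled loopback, bridge, and podman CNI configs by renaming them with a .mk_disabled suffix, so that only the CNI minikube itself deploys is active. A minimal stand-alone Go sketch of that rename-to-disable pattern (assumes root and the /etc/cni/net.d path from the log; not minikube's own code):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Match the same globs the log disables: loopback, bridge, and podman configs.
    	patterns := []string{
    		"/etc/cni/net.d/*loopback.conf*",
    		"/etc/cni/net.d/*bridge*",
    		"/etc/cni/net.d/*podman*",
    	}
    	for _, pattern := range patterns {
    		matches, err := filepath.Glob(pattern)
    		if err != nil {
    			panic(err)
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled on a previous run
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				panic(err)
    			}
    			fmt.Println("disabled", m)
    		}
    	}
    }
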
	I1205 19:59:49.287627  100448 start.go:475] detecting cgroup driver to use...
	I1205 19:59:49.287666  100448 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1205 19:59:49.287711  100448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:59:49.301648  100448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:59:49.311459  100448 docker.go:203] disabling cri-docker service (if available) ...
	I1205 19:59:49.311524  100448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:59:49.323591  100448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:59:49.336267  100448 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:59:49.413725  100448 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:59:49.427018  100448 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1205 19:59:49.494230  100448 docker.go:219] disabling docker service ...
	I1205 19:59:49.494293  100448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:59:49.511038  100448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:59:49.521294  100448 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:59:49.531613  100448 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1205 19:59:49.594255  100448 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:59:49.674255  100448 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1205 19:59:49.674332  100448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:59:49.684308  100448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:59:49.698537  100448 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
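
For reference, /etc/crictl.yaml is how crictl (and the checks that follow) locate the CRI socket; the file written here contains only the runtime-endpoint line echoed above. A slightly fuller example of the same file (image-endpoint and timeout are other standard crictl settings, added purely as illustration):

    runtime-endpoint: unix:///var/run/crio/crio.sock
    image-endpoint: unix:///var/run/crio/crio.sock
    timeout: 10
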
	I1205 19:59:49.698579  100448 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 19:59:49.698615  100448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:59:49.707205  100448 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:59:49.707263  100448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:59:49.716027  100448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:59:49.724443  100448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
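
The three sed edits above pin the pause image and align CRI-O's cgroup handling with the kubelet. Reconstructed from those commands, the relevant lines of /etc/crio/crio.conf.d/02-crio.conf end up roughly as below (the [crio.runtime]/[crio.image] section headers are assumed from crio.conf(5); the cgroup values match the crio config dump later in this log):

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
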
	I1205 19:59:49.732732  100448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:59:49.740706  100448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:59:49.747131  100448 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 19:59:49.747744  100448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:59:49.754657  100448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:59:49.833282  100448 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:59:49.920821  100448 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:59:49.920904  100448 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:59:49.924210  100448 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 19:59:49.924239  100448 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 19:59:49.924248  100448 command_runner.go:130] > Device: 41h/65d	Inode: 190         Links: 1
	I1205 19:59:49.924257  100448 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 19:59:49.924264  100448 command_runner.go:130] > Access: 2023-12-05 19:59:49.904465760 +0000
	I1205 19:59:49.924274  100448 command_runner.go:130] > Modify: 2023-12-05 19:59:49.904465760 +0000
	I1205 19:59:49.924287  100448 command_runner.go:130] > Change: 2023-12-05 19:59:49.904465760 +0000
	I1205 19:59:49.924297  100448 command_runner.go:130] >  Birth: -
	I1205 19:59:49.924346  100448 start.go:543] Will wait 60s for crictl version
	I1205 19:59:49.924390  100448 ssh_runner.go:195] Run: which crictl
	I1205 19:59:49.927345  100448 command_runner.go:130] > /usr/bin/crictl
	I1205 19:59:49.927419  100448 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:59:49.955999  100448 command_runner.go:130] > Version:  0.1.0
	I1205 19:59:49.956019  100448 command_runner.go:130] > RuntimeName:  cri-o
	I1205 19:59:49.956023  100448 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1205 19:59:49.956028  100448 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 19:59:49.958097  100448 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1205 19:59:49.958165  100448 ssh_runner.go:195] Run: crio --version
	I1205 19:59:49.990085  100448 command_runner.go:130] > crio version 1.24.6
	I1205 19:59:49.990110  100448 command_runner.go:130] > Version:          1.24.6
	I1205 19:59:49.990121  100448 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1205 19:59:49.990129  100448 command_runner.go:130] > GitTreeState:     clean
	I1205 19:59:49.990140  100448 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1205 19:59:49.990147  100448 command_runner.go:130] > GoVersion:        go1.18.2
	I1205 19:59:49.990153  100448 command_runner.go:130] > Compiler:         gc
	I1205 19:59:49.990161  100448 command_runner.go:130] > Platform:         linux/amd64
	I1205 19:59:49.990166  100448 command_runner.go:130] > Linkmode:         dynamic
	I1205 19:59:49.990179  100448 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 19:59:49.990187  100448 command_runner.go:130] > SeccompEnabled:   true
	I1205 19:59:49.990199  100448 command_runner.go:130] > AppArmorEnabled:  false
	I1205 19:59:49.991923  100448 ssh_runner.go:195] Run: crio --version
	I1205 19:59:50.024649  100448 command_runner.go:130] > crio version 1.24.6
	I1205 19:59:50.024679  100448 command_runner.go:130] > Version:          1.24.6
	I1205 19:59:50.024691  100448 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1205 19:59:50.024698  100448 command_runner.go:130] > GitTreeState:     clean
	I1205 19:59:50.024708  100448 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1205 19:59:50.024715  100448 command_runner.go:130] > GoVersion:        go1.18.2
	I1205 19:59:50.024722  100448 command_runner.go:130] > Compiler:         gc
	I1205 19:59:50.024729  100448 command_runner.go:130] > Platform:         linux/amd64
	I1205 19:59:50.024738  100448 command_runner.go:130] > Linkmode:         dynamic
	I1205 19:59:50.024747  100448 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 19:59:50.024754  100448 command_runner.go:130] > SeccompEnabled:   true
	I1205 19:59:50.024759  100448 command_runner.go:130] > AppArmorEnabled:  false
	I1205 19:59:50.026961  100448 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1205 19:59:50.028305  100448 cli_runner.go:164] Run: docker network inspect multinode-340918 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:59:50.044084  100448 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1205 19:59:50.047601  100448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
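
The bash one-liner above makes the host.minikube.internal mapping idempotent: strip any old entry, append the current gateway IP, then copy the result back over /etc/hosts. A stand-alone Go sketch of the same strip-and-append pattern (the 192.168.58.1 address is taken from the log; this is not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const hostsPath = "/etc/hosts"
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		panic(err)
    	}
    	// Drop any existing host.minikube.internal line, mirroring grep -v.
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, "192.168.58.1\thost.minikube.internal")
    	// Write to a temp file, then rename over the original, like the cp step.
    	tmp := hostsPath + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    	if err := os.Rename(tmp, hostsPath); err != nil {
    		panic(err)
    	}
    	fmt.Println("updated", hostsPath)
    }
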
	I1205 19:59:50.057286  100448 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:59:50.057345  100448 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:59:50.108559  100448 command_runner.go:130] > {
	I1205 19:59:50.108592  100448 command_runner.go:130] >   "images": [
	I1205 19:59:50.108599  100448 command_runner.go:130] >     {
	I1205 19:59:50.108612  100448 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1205 19:59:50.108629  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.108638  100448 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1205 19:59:50.108642  100448 command_runner.go:130] >       ],
	I1205 19:59:50.108649  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.108667  100448 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1205 19:59:50.108682  100448 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1205 19:59:50.108691  100448 command_runner.go:130] >       ],
	I1205 19:59:50.108702  100448 command_runner.go:130] >       "size": "65258016",
	I1205 19:59:50.108712  100448 command_runner.go:130] >       "uid": null,
	I1205 19:59:50.108721  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.108729  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.108733  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.108743  100448 command_runner.go:130] >     },
	I1205 19:59:50.108749  100448 command_runner.go:130] >     {
	I1205 19:59:50.108755  100448 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1205 19:59:50.108762  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.108768  100448 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 19:59:50.108777  100448 command_runner.go:130] >       ],
	I1205 19:59:50.108786  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.108796  100448 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1205 19:59:50.108805  100448 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1205 19:59:50.108811  100448 command_runner.go:130] >       ],
	I1205 19:59:50.108819  100448 command_runner.go:130] >       "size": "31470524",
	I1205 19:59:50.108825  100448 command_runner.go:130] >       "uid": null,
	I1205 19:59:50.108829  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.108833  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.108839  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.108843  100448 command_runner.go:130] >     },
	I1205 19:59:50.108849  100448 command_runner.go:130] >     {
	I1205 19:59:50.108855  100448 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1205 19:59:50.108861  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.108867  100448 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1205 19:59:50.108873  100448 command_runner.go:130] >       ],
	I1205 19:59:50.108877  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.108887  100448 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1205 19:59:50.108896  100448 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1205 19:59:50.108904  100448 command_runner.go:130] >       ],
	I1205 19:59:50.108911  100448 command_runner.go:130] >       "size": "53621675",
	I1205 19:59:50.108915  100448 command_runner.go:130] >       "uid": null,
	I1205 19:59:50.108922  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.108926  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.108930  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.108937  100448 command_runner.go:130] >     },
	I1205 19:59:50.108940  100448 command_runner.go:130] >     {
	I1205 19:59:50.108949  100448 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1205 19:59:50.108955  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.108960  100448 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1205 19:59:50.108966  100448 command_runner.go:130] >       ],
	I1205 19:59:50.108970  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.108977  100448 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1205 19:59:50.108985  100448 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1205 19:59:50.109000  100448 command_runner.go:130] >       ],
	I1205 19:59:50.109007  100448 command_runner.go:130] >       "size": "295456551",
	I1205 19:59:50.109011  100448 command_runner.go:130] >       "uid": {
	I1205 19:59:50.109019  100448 command_runner.go:130] >         "value": "0"
	I1205 19:59:50.109023  100448 command_runner.go:130] >       },
	I1205 19:59:50.109032  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.109038  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.109043  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.109048  100448 command_runner.go:130] >     },
	I1205 19:59:50.109052  100448 command_runner.go:130] >     {
	I1205 19:59:50.109060  100448 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1205 19:59:50.109066  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.109072  100448 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1205 19:59:50.109078  100448 command_runner.go:130] >       ],
	I1205 19:59:50.109082  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.109091  100448 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1205 19:59:50.109100  100448 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1205 19:59:50.109107  100448 command_runner.go:130] >       ],
	I1205 19:59:50.109111  100448 command_runner.go:130] >       "size": "127226832",
	I1205 19:59:50.109114  100448 command_runner.go:130] >       "uid": {
	I1205 19:59:50.109118  100448 command_runner.go:130] >         "value": "0"
	I1205 19:59:50.109127  100448 command_runner.go:130] >       },
	I1205 19:59:50.109131  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.109135  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.109141  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.109145  100448 command_runner.go:130] >     },
	I1205 19:59:50.109151  100448 command_runner.go:130] >     {
	I1205 19:59:50.109157  100448 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1205 19:59:50.109163  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.109169  100448 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1205 19:59:50.109174  100448 command_runner.go:130] >       ],
	I1205 19:59:50.109179  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.109189  100448 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1205 19:59:50.109198  100448 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1205 19:59:50.109204  100448 command_runner.go:130] >       ],
	I1205 19:59:50.109208  100448 command_runner.go:130] >       "size": "123261750",
	I1205 19:59:50.109211  100448 command_runner.go:130] >       "uid": {
	I1205 19:59:50.109218  100448 command_runner.go:130] >         "value": "0"
	I1205 19:59:50.109221  100448 command_runner.go:130] >       },
	I1205 19:59:50.109230  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.109236  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.109240  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.109246  100448 command_runner.go:130] >     },
	I1205 19:59:50.109250  100448 command_runner.go:130] >     {
	I1205 19:59:50.109262  100448 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1205 19:59:50.109270  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.109281  100448 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1205 19:59:50.109288  100448 command_runner.go:130] >       ],
	I1205 19:59:50.109295  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.109308  100448 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1205 19:59:50.109321  100448 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1205 19:59:50.109331  100448 command_runner.go:130] >       ],
	I1205 19:59:50.109338  100448 command_runner.go:130] >       "size": "74749335",
	I1205 19:59:50.109346  100448 command_runner.go:130] >       "uid": null,
	I1205 19:59:50.109353  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.109362  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.109371  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.109383  100448 command_runner.go:130] >     },
	I1205 19:59:50.109391  100448 command_runner.go:130] >     {
	I1205 19:59:50.109414  100448 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1205 19:59:50.109440  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.109448  100448 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1205 19:59:50.109456  100448 command_runner.go:130] >       ],
	I1205 19:59:50.109465  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.109544  100448 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1205 19:59:50.109564  100448 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1205 19:59:50.109570  100448 command_runner.go:130] >       ],
	I1205 19:59:50.109589  100448 command_runner.go:130] >       "size": "61551410",
	I1205 19:59:50.109597  100448 command_runner.go:130] >       "uid": {
	I1205 19:59:50.109603  100448 command_runner.go:130] >         "value": "0"
	I1205 19:59:50.109612  100448 command_runner.go:130] >       },
	I1205 19:59:50.109621  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.109631  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.109640  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.109648  100448 command_runner.go:130] >     },
	I1205 19:59:50.109658  100448 command_runner.go:130] >     {
	I1205 19:59:50.109670  100448 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1205 19:59:50.109679  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.109687  100448 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1205 19:59:50.109696  100448 command_runner.go:130] >       ],
	I1205 19:59:50.109702  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.109716  100448 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1205 19:59:50.109729  100448 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1205 19:59:50.109739  100448 command_runner.go:130] >       ],
	I1205 19:59:50.109747  100448 command_runner.go:130] >       "size": "750414",
	I1205 19:59:50.109757  100448 command_runner.go:130] >       "uid": {
	I1205 19:59:50.109767  100448 command_runner.go:130] >         "value": "65535"
	I1205 19:59:50.109776  100448 command_runner.go:130] >       },
	I1205 19:59:50.109783  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.109789  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.109799  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.109805  100448 command_runner.go:130] >     }
	I1205 19:59:50.109813  100448 command_runner.go:130] >   ]
	I1205 19:59:50.109825  100448 command_runner.go:130] > }
	I1205 19:59:50.110560  100448 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 19:59:50.110580  100448 crio.go:415] Images already preloaded, skipping extraction
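
The preload check works by listing the runtime's images as JSON, exactly the document seen above, and verifying that every image required for the requested Kubernetes version is present. A minimal sketch of such a check against the schema shown (the tag chosen and the use of sudo are illustrative; this is not minikube's crio.go code):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList mirrors the crictl JSON dumped above: an images array
    // where each entry carries its repoTags.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	want := "registry.k8s.io/kube-apiserver:v1.28.4"
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				fmt.Println("found", want)
    				return
    			}
    		}
    	}
    	fmt.Println("missing", want)
    }
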
	I1205 19:59:50.110622  100448 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:59:50.141366  100448 command_runner.go:130] > {
	I1205 19:59:50.141384  100448 command_runner.go:130] >   "images": [
	I1205 19:59:50.141389  100448 command_runner.go:130] >     {
	I1205 19:59:50.141396  100448 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1205 19:59:50.141401  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.141406  100448 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1205 19:59:50.141410  100448 command_runner.go:130] >       ],
	I1205 19:59:50.141414  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.141432  100448 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1205 19:59:50.141442  100448 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1205 19:59:50.141448  100448 command_runner.go:130] >       ],
	I1205 19:59:50.141460  100448 command_runner.go:130] >       "size": "65258016",
	I1205 19:59:50.141467  100448 command_runner.go:130] >       "uid": null,
	I1205 19:59:50.141471  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.141479  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.141483  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.141489  100448 command_runner.go:130] >     },
	I1205 19:59:50.141493  100448 command_runner.go:130] >     {
	I1205 19:59:50.141499  100448 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1205 19:59:50.141503  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.141512  100448 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 19:59:50.141515  100448 command_runner.go:130] >       ],
	I1205 19:59:50.141519  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.141527  100448 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1205 19:59:50.141535  100448 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1205 19:59:50.141538  100448 command_runner.go:130] >       ],
	I1205 19:59:50.141547  100448 command_runner.go:130] >       "size": "31470524",
	I1205 19:59:50.141551  100448 command_runner.go:130] >       "uid": null,
	I1205 19:59:50.141556  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.141562  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.141569  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.141573  100448 command_runner.go:130] >     },
	I1205 19:59:50.141577  100448 command_runner.go:130] >     {
	I1205 19:59:50.141583  100448 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1205 19:59:50.141590  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.141595  100448 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1205 19:59:50.141602  100448 command_runner.go:130] >       ],
	I1205 19:59:50.141606  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.141616  100448 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1205 19:59:50.141627  100448 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1205 19:59:50.141633  100448 command_runner.go:130] >       ],
	I1205 19:59:50.141637  100448 command_runner.go:130] >       "size": "53621675",
	I1205 19:59:50.141643  100448 command_runner.go:130] >       "uid": null,
	I1205 19:59:50.141648  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.141654  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.141658  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.141664  100448 command_runner.go:130] >     },
	I1205 19:59:50.141670  100448 command_runner.go:130] >     {
	I1205 19:59:50.141679  100448 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1205 19:59:50.141683  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.141689  100448 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1205 19:59:50.141694  100448 command_runner.go:130] >       ],
	I1205 19:59:50.141699  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.141708  100448 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1205 19:59:50.141717  100448 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1205 19:59:50.141728  100448 command_runner.go:130] >       ],
	I1205 19:59:50.141735  100448 command_runner.go:130] >       "size": "295456551",
	I1205 19:59:50.141739  100448 command_runner.go:130] >       "uid": {
	I1205 19:59:50.141746  100448 command_runner.go:130] >         "value": "0"
	I1205 19:59:50.141750  100448 command_runner.go:130] >       },
	I1205 19:59:50.141756  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.141760  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.141767  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.141770  100448 command_runner.go:130] >     },
	I1205 19:59:50.141777  100448 command_runner.go:130] >     {
	I1205 19:59:50.141785  100448 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1205 19:59:50.141791  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.141797  100448 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1205 19:59:50.141803  100448 command_runner.go:130] >       ],
	I1205 19:59:50.141807  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.141820  100448 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1205 19:59:50.141830  100448 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1205 19:59:50.141835  100448 command_runner.go:130] >       ],
	I1205 19:59:50.141840  100448 command_runner.go:130] >       "size": "127226832",
	I1205 19:59:50.141846  100448 command_runner.go:130] >       "uid": {
	I1205 19:59:50.141850  100448 command_runner.go:130] >         "value": "0"
	I1205 19:59:50.141856  100448 command_runner.go:130] >       },
	I1205 19:59:50.141861  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.141871  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.141881  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.141888  100448 command_runner.go:130] >     },
	I1205 19:59:50.141892  100448 command_runner.go:130] >     {
	I1205 19:59:50.141900  100448 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1205 19:59:50.141909  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.141918  100448 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1205 19:59:50.141924  100448 command_runner.go:130] >       ],
	I1205 19:59:50.141928  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.141938  100448 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1205 19:59:50.141948  100448 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1205 19:59:50.141954  100448 command_runner.go:130] >       ],
	I1205 19:59:50.141959  100448 command_runner.go:130] >       "size": "123261750",
	I1205 19:59:50.141965  100448 command_runner.go:130] >       "uid": {
	I1205 19:59:50.141969  100448 command_runner.go:130] >         "value": "0"
	I1205 19:59:50.141975  100448 command_runner.go:130] >       },
	I1205 19:59:50.141980  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.141986  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.141990  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.141996  100448 command_runner.go:130] >     },
	I1205 19:59:50.142000  100448 command_runner.go:130] >     {
	I1205 19:59:50.142012  100448 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1205 19:59:50.142018  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.142025  100448 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1205 19:59:50.142031  100448 command_runner.go:130] >       ],
	I1205 19:59:50.142036  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.142045  100448 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1205 19:59:50.142054  100448 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1205 19:59:50.142060  100448 command_runner.go:130] >       ],
	I1205 19:59:50.142065  100448 command_runner.go:130] >       "size": "74749335",
	I1205 19:59:50.142071  100448 command_runner.go:130] >       "uid": null,
	I1205 19:59:50.142075  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.142079  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.142085  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.142089  100448 command_runner.go:130] >     },
	I1205 19:59:50.142100  100448 command_runner.go:130] >     {
	I1205 19:59:50.142109  100448 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1205 19:59:50.142114  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.142122  100448 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1205 19:59:50.142128  100448 command_runner.go:130] >       ],
	I1205 19:59:50.142132  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.142183  100448 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1205 19:59:50.142194  100448 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1205 19:59:50.142200  100448 command_runner.go:130] >       ],
	I1205 19:59:50.142205  100448 command_runner.go:130] >       "size": "61551410",
	I1205 19:59:50.142211  100448 command_runner.go:130] >       "uid": {
	I1205 19:59:50.142215  100448 command_runner.go:130] >         "value": "0"
	I1205 19:59:50.142220  100448 command_runner.go:130] >       },
	I1205 19:59:50.142230  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.142238  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.142248  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.142256  100448 command_runner.go:130] >     },
	I1205 19:59:50.142264  100448 command_runner.go:130] >     {
	I1205 19:59:50.142276  100448 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1205 19:59:50.142286  100448 command_runner.go:130] >       "repoTags": [
	I1205 19:59:50.142297  100448 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1205 19:59:50.142308  100448 command_runner.go:130] >       ],
	I1205 19:59:50.142315  100448 command_runner.go:130] >       "repoDigests": [
	I1205 19:59:50.142325  100448 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1205 19:59:50.142340  100448 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1205 19:59:50.142346  100448 command_runner.go:130] >       ],
	I1205 19:59:50.142352  100448 command_runner.go:130] >       "size": "750414",
	I1205 19:59:50.142362  100448 command_runner.go:130] >       "uid": {
	I1205 19:59:50.142370  100448 command_runner.go:130] >         "value": "65535"
	I1205 19:59:50.142379  100448 command_runner.go:130] >       },
	I1205 19:59:50.142386  100448 command_runner.go:130] >       "username": "",
	I1205 19:59:50.142395  100448 command_runner.go:130] >       "spec": null,
	I1205 19:59:50.142402  100448 command_runner.go:130] >       "pinned": false
	I1205 19:59:50.142416  100448 command_runner.go:130] >     }
	I1205 19:59:50.142425  100448 command_runner.go:130] >   ]
	I1205 19:59:50.142429  100448 command_runner.go:130] > }
	I1205 19:59:50.143432  100448 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 19:59:50.143453  100448 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:59:50.143512  100448 ssh_runner.go:195] Run: crio config
	I1205 19:59:50.179318  100448 command_runner.go:130] ! time="2023-12-05 19:59:50.178848208Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1205 19:59:50.179345  100448 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 19:59:50.184515  100448 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 19:59:50.184544  100448 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 19:59:50.184551  100448 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 19:59:50.184554  100448 command_runner.go:130] > #
	I1205 19:59:50.184561  100448 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 19:59:50.184567  100448 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 19:59:50.184577  100448 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 19:59:50.184587  100448 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 19:59:50.184605  100448 command_runner.go:130] > # reload'.
	I1205 19:59:50.184627  100448 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 19:59:50.184637  100448 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 19:59:50.184647  100448 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 19:59:50.184655  100448 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 19:59:50.184659  100448 command_runner.go:130] > [crio]
	I1205 19:59:50.184672  100448 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 19:59:50.184684  100448 command_runner.go:130] > # container images, in this directory.
	I1205 19:59:50.184706  100448 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1205 19:59:50.184722  100448 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 19:59:50.184731  100448 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1205 19:59:50.184749  100448 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 19:59:50.184758  100448 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 19:59:50.184769  100448 command_runner.go:130] > # storage_driver = "vfs"
	I1205 19:59:50.184782  100448 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 19:59:50.184796  100448 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 19:59:50.184806  100448 command_runner.go:130] > # storage_option = [
	I1205 19:59:50.184815  100448 command_runner.go:130] > # ]
	I1205 19:59:50.184829  100448 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 19:59:50.184842  100448 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 19:59:50.184850  100448 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 19:59:50.184862  100448 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 19:59:50.184875  100448 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 19:59:50.184887  100448 command_runner.go:130] > # always happen on a node reboot
	I1205 19:59:50.184898  100448 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 19:59:50.184911  100448 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 19:59:50.184923  100448 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 19:59:50.184941  100448 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 19:59:50.184950  100448 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1205 19:59:50.184969  100448 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 19:59:50.184986  100448 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 19:59:50.184997  100448 command_runner.go:130] > # internal_wipe = true
	I1205 19:59:50.185009  100448 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 19:59:50.185023  100448 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 19:59:50.185035  100448 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 19:59:50.185046  100448 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 19:59:50.185055  100448 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 19:59:50.185063  100448 command_runner.go:130] > [crio.api]
	I1205 19:59:50.185076  100448 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 19:59:50.185085  100448 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 19:59:50.185096  100448 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 19:59:50.185106  100448 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 19:59:50.185119  100448 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 19:59:50.185132  100448 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 19:59:50.185141  100448 command_runner.go:130] > # stream_port = "0"
	I1205 19:59:50.185149  100448 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 19:59:50.185159  100448 command_runner.go:130] > # stream_enable_tls = false
	I1205 19:59:50.185177  100448 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 19:59:50.185188  100448 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 19:59:50.185202  100448 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 19:59:50.185216  100448 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1205 19:59:50.185225  100448 command_runner.go:130] > # minutes.
	I1205 19:59:50.185235  100448 command_runner.go:130] > # stream_tls_cert = ""
	I1205 19:59:50.185244  100448 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 19:59:50.185257  100448 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1205 19:59:50.185268  100448 command_runner.go:130] > # stream_tls_key = ""
	I1205 19:59:50.185278  100448 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 19:59:50.185291  100448 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 19:59:50.185303  100448 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1205 19:59:50.185312  100448 command_runner.go:130] > # stream_tls_ca = ""
	I1205 19:59:50.185327  100448 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 19:59:50.185336  100448 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1205 19:59:50.185348  100448 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 19:59:50.185359  100448 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1205 19:59:50.185399  100448 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 19:59:50.185421  100448 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 19:59:50.185429  100448 command_runner.go:130] > [crio.runtime]
	I1205 19:59:50.185438  100448 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 19:59:50.185451  100448 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 19:59:50.185462  100448 command_runner.go:130] > # "nofile=1024:2048"
	I1205 19:59:50.185475  100448 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 19:59:50.185485  100448 command_runner.go:130] > # default_ulimits = [
	I1205 19:59:50.185494  100448 command_runner.go:130] > # ]
	I1205 19:59:50.185507  100448 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 19:59:50.185516  100448 command_runner.go:130] > # no_pivot = false
	I1205 19:59:50.185525  100448 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 19:59:50.185538  100448 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 19:59:50.185550  100448 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 19:59:50.185564  100448 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 19:59:50.185575  100448 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 19:59:50.185589  100448 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 19:59:50.185603  100448 command_runner.go:130] > # conmon = ""
	I1205 19:59:50.185614  100448 command_runner.go:130] > # Cgroup setting for conmon
	I1205 19:59:50.185628  100448 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 19:59:50.185639  100448 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 19:59:50.185653  100448 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 19:59:50.185665  100448 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 19:59:50.185679  100448 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 19:59:50.185689  100448 command_runner.go:130] > # conmon_env = [
	I1205 19:59:50.185697  100448 command_runner.go:130] > # ]
	I1205 19:59:50.185709  100448 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 19:59:50.185719  100448 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 19:59:50.185729  100448 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 19:59:50.185739  100448 command_runner.go:130] > # default_env = [
	I1205 19:59:50.185748  100448 command_runner.go:130] > # ]
	I1205 19:59:50.185758  100448 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 19:59:50.185769  100448 command_runner.go:130] > # selinux = false
	I1205 19:59:50.185782  100448 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 19:59:50.185795  100448 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1205 19:59:50.185808  100448 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1205 19:59:50.185817  100448 command_runner.go:130] > # seccomp_profile = ""
	I1205 19:59:50.185828  100448 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1205 19:59:50.185842  100448 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1205 19:59:50.185856  100448 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1205 19:59:50.185867  100448 command_runner.go:130] > # which might increase security.
	I1205 19:59:50.185878  100448 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1205 19:59:50.185891  100448 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 19:59:50.185905  100448 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 19:59:50.185920  100448 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 19:59:50.185935  100448 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 19:59:50.185947  100448 command_runner.go:130] > # This option supports live configuration reload.
	I1205 19:59:50.185959  100448 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 19:59:50.185971  100448 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 19:59:50.185982  100448 command_runner.go:130] > # the cgroup blockio controller.
	I1205 19:59:50.185992  100448 command_runner.go:130] > # blockio_config_file = ""
	I1205 19:59:50.186006  100448 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 19:59:50.186013  100448 command_runner.go:130] > # irqbalance daemon.
	I1205 19:59:50.186019  100448 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 19:59:50.186033  100448 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 19:59:50.186049  100448 command_runner.go:130] > # This option supports live configuration reload.
	I1205 19:59:50.186060  100448 command_runner.go:130] > # rdt_config_file = ""
	I1205 19:59:50.186076  100448 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 19:59:50.186086  100448 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 19:59:50.186099  100448 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 19:59:50.186109  100448 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 19:59:50.186118  100448 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 19:59:50.186131  100448 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 19:59:50.186141  100448 command_runner.go:130] > # will be added.
	I1205 19:59:50.186149  100448 command_runner.go:130] > # default_capabilities = [
	I1205 19:59:50.186159  100448 command_runner.go:130] > # 	"CHOWN",
	I1205 19:59:50.186169  100448 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 19:59:50.186178  100448 command_runner.go:130] > # 	"FSETID",
	I1205 19:59:50.186188  100448 command_runner.go:130] > # 	"FOWNER",
	I1205 19:59:50.186198  100448 command_runner.go:130] > # 	"SETGID",
	I1205 19:59:50.186207  100448 command_runner.go:130] > # 	"SETUID",
	I1205 19:59:50.186213  100448 command_runner.go:130] > # 	"SETPCAP",
	I1205 19:59:50.186218  100448 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 19:59:50.186230  100448 command_runner.go:130] > # 	"KILL",
	I1205 19:59:50.186240  100448 command_runner.go:130] > # ]
	I1205 19:59:50.186253  100448 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1205 19:59:50.186267  100448 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1205 19:59:50.186278  100448 command_runner.go:130] > # add_inheritable_capabilities = true
	I1205 19:59:50.186291  100448 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 19:59:50.186303  100448 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 19:59:50.186311  100448 command_runner.go:130] > # default_sysctls = [
	I1205 19:59:50.186315  100448 command_runner.go:130] > # ]
	I1205 19:59:50.186326  100448 command_runner.go:130] > # List of devices on the host that a
	I1205 19:59:50.186340  100448 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 19:59:50.186351  100448 command_runner.go:130] > # allowed_devices = [
	I1205 19:59:50.186361  100448 command_runner.go:130] > # 	"/dev/fuse",
	I1205 19:59:50.186370  100448 command_runner.go:130] > # ]
	I1205 19:59:50.186381  100448 command_runner.go:130] > # List of additional devices, specified as
	I1205 19:59:50.186431  100448 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 19:59:50.186447  100448 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 19:59:50.186457  100448 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 19:59:50.186471  100448 command_runner.go:130] > # additional_devices = [
	I1205 19:59:50.186480  100448 command_runner.go:130] > # ]
	I1205 19:59:50.186492  100448 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 19:59:50.186502  100448 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 19:59:50.186511  100448 command_runner.go:130] > # 	"/etc/cdi",
	I1205 19:59:50.186520  100448 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 19:59:50.186526  100448 command_runner.go:130] > # ]
	I1205 19:59:50.186536  100448 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 19:59:50.186550  100448 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 19:59:50.186560  100448 command_runner.go:130] > # Defaults to false.
	I1205 19:59:50.186572  100448 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 19:59:50.186586  100448 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 19:59:50.186602  100448 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 19:59:50.186609  100448 command_runner.go:130] > # hooks_dir = [
	I1205 19:59:50.186620  100448 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 19:59:50.186629  100448 command_runner.go:130] > # ]
	I1205 19:59:50.186643  100448 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 19:59:50.186657  100448 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 19:59:50.186672  100448 command_runner.go:130] > # its default mounts from the following two files:
	I1205 19:59:50.186681  100448 command_runner.go:130] > #
	I1205 19:59:50.186692  100448 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 19:59:50.186703  100448 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 19:59:50.186715  100448 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 19:59:50.186724  100448 command_runner.go:130] > #
	I1205 19:59:50.186735  100448 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 19:59:50.186749  100448 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 19:59:50.186762  100448 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 19:59:50.186774  100448 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 19:59:50.186783  100448 command_runner.go:130] > #
	I1205 19:59:50.186792  100448 command_runner.go:130] > # default_mounts_file = ""
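As a hedged sketch of the /SRC:/DST format described above (the zoneinfo pair and the use of the override file are illustrative, not from this run), a default mounts file could look like:

	# hypothetical override file; one /SRC:/DST mount per line
	sudo tee /etc/containers/mounts.conf <<-'EOF'
	/usr/share/zoneinfo:/usr/share/zoneinfo
	EOF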
	I1205 19:59:50.186800  100448 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 19:59:50.186814  100448 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 19:59:50.186825  100448 command_runner.go:130] > # pids_limit = 0
	I1205 19:59:50.186836  100448 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1205 19:59:50.186849  100448 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 19:59:50.186862  100448 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 19:59:50.186883  100448 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 19:59:50.186892  100448 command_runner.go:130] > # log_size_max = -1
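Since both pids_limit and log_size_max are deprecated in favor of kubelet settings, the kubelet-side equivalents look roughly like the following sketch (the values are illustrative; in this run the kubelet config is the /var/lib/kubelet/config.yaml rendered later in this log, so a real change would merge these keys into that file rather than append):

	sudo tee -a /var/lib/kubelet/config.yaml <<-'EOF'
	podPidsLimit: 4096           # replaces CRI-O pids_limit (--pod-pids-limit)
	containerLogMaxSize: "10Mi"  # replaces CRI-O log_size_max (--container-log-max-size)
	EOF
	sudo systemctl restart kubelet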
	I1205 19:59:50.186903  100448 command_runner.go:130] > # Whether container output should be logged to journald in addition to the Kubernetes log file
	I1205 19:59:50.186913  100448 command_runner.go:130] > # log_to_journald = false
	I1205 19:59:50.186927  100448 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 19:59:50.186938  100448 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 19:59:50.186951  100448 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 19:59:50.186963  100448 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 19:59:50.186975  100448 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 19:59:50.186985  100448 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 19:59:50.186994  100448 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 19:59:50.187003  100448 command_runner.go:130] > # read_only = false
	I1205 19:59:50.187016  100448 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 19:59:50.187030  100448 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 19:59:50.187041  100448 command_runner.go:130] > # live configuration reload.
	I1205 19:59:50.187051  100448 command_runner.go:130] > # log_level = "info"
	I1205 19:59:50.187064  100448 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 19:59:50.187075  100448 command_runner.go:130] > # This option supports live configuration reload.
	I1205 19:59:50.187085  100448 command_runner.go:130] > # log_filter = ""
	I1205 19:59:50.187096  100448 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 19:59:50.187106  100448 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 19:59:50.187116  100448 command_runner.go:130] > # separated by comma.
	I1205 19:59:50.187127  100448 command_runner.go:130] > # uid_mappings = ""
	I1205 19:59:50.187137  100448 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 19:59:50.187150  100448 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 19:59:50.187160  100448 command_runner.go:130] > # separated by comma.
	I1205 19:59:50.187171  100448 command_runner.go:130] > # gid_mappings = ""
	I1205 19:59:50.187184  100448 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 19:59:50.187195  100448 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 19:59:50.187204  100448 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 19:59:50.187214  100448 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 19:59:50.187229  100448 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 19:59:50.187244  100448 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 19:59:50.187257  100448 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 19:59:50.187271  100448 command_runner.go:130] > # minimum_mappable_gid = -1
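As a hedged illustration of the containerID:HostID:Size syntax above (the ranges and the drop-in path are illustrative; CRI-O reads configuration drop-ins from /etc/crio/crio.conf.d/):

	sudo tee /etc/crio/crio.conf.d/10-userns.conf <<-'EOF'
	[crio.runtime]
	# map container IDs 0-65535 onto host IDs starting at 100000
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"
	EOF
	sudo systemctl restart crio   # these options are not live-reloadable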
	I1205 19:59:50.187284  100448 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 19:59:50.187297  100448 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 19:59:50.187309  100448 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 19:59:50.187320  100448 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 19:59:50.187334  100448 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 19:59:50.187350  100448 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 19:59:50.187361  100448 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 19:59:50.187373  100448 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 19:59:50.187381  100448 command_runner.go:130] > # drop_infra_ctr = true
	I1205 19:59:50.187391  100448 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 19:59:50.187412  100448 command_runner.go:130] > # You can use the Linux CPU list format to specify the desired CPUs.
	I1205 19:59:50.187428  100448 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 19:59:50.187439  100448 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 19:59:50.187452  100448 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 19:59:50.187464  100448 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 19:59:50.187474  100448 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 19:59:50.187489  100448 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 19:59:50.187495  100448 command_runner.go:130] > # pinns_path = ""
	I1205 19:59:50.187504  100448 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 19:59:50.187521  100448 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1205 19:59:50.187536  100448 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1205 19:59:50.187546  100448 command_runner.go:130] > # default_runtime = "runc"
	I1205 19:59:50.187560  100448 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 19:59:50.187575  100448 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I1205 19:59:50.187588  100448 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 19:59:50.187605  100448 command_runner.go:130] > # creation as a file is not desired either.
	I1205 19:59:50.187622  100448 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 19:59:50.187633  100448 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 19:59:50.187645  100448 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 19:59:50.187654  100448 command_runner.go:130] > # ]
	I1205 19:59:50.187667  100448 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 19:59:50.187676  100448 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 19:59:50.187689  100448 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1205 19:59:50.187703  100448 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1205 19:59:50.187713  100448 command_runner.go:130] > #
	I1205 19:59:50.187724  100448 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1205 19:59:50.187736  100448 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1205 19:59:50.187750  100448 command_runner.go:130] > #  runtime_type = "oci"
	I1205 19:59:50.187762  100448 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1205 19:59:50.187770  100448 command_runner.go:130] > #  privileged_without_host_devices = false
	I1205 19:59:50.187778  100448 command_runner.go:130] > #  allowed_annotations = []
	I1205 19:59:50.187787  100448 command_runner.go:130] > # Where:
	I1205 19:59:50.187800  100448 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1205 19:59:50.187814  100448 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1205 19:59:50.187828  100448 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 19:59:50.187841  100448 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 19:59:50.187851  100448 command_runner.go:130] > #   in $PATH.
	I1205 19:59:50.187866  100448 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1205 19:59:50.187875  100448 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 19:59:50.187888  100448 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1205 19:59:50.187898  100448 command_runner.go:130] > #   state.
	I1205 19:59:50.187909  100448 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 19:59:50.187950  100448 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1205 19:59:50.187967  100448 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 19:59:50.187976  100448 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 19:59:50.187996  100448 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 19:59:50.188011  100448 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 19:59:50.188023  100448 command_runner.go:130] > #   The currently recognized values are:
	I1205 19:59:50.188037  100448 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 19:59:50.188052  100448 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 19:59:50.188065  100448 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 19:59:50.188075  100448 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 19:59:50.188085  100448 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 19:59:50.188099  100448 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 19:59:50.188112  100448 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 19:59:50.188127  100448 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1205 19:59:50.188139  100448 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 19:59:50.188149  100448 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 19:59:50.188160  100448 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1205 19:59:50.188170  100448 command_runner.go:130] > runtime_type = "oci"
	I1205 19:59:50.188179  100448 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 19:59:50.188187  100448 command_runner.go:130] > runtime_config_path = ""
	I1205 19:59:50.188210  100448 command_runner.go:130] > monitor_path = ""
	I1205 19:59:50.188227  100448 command_runner.go:130] > monitor_cgroup = ""
	I1205 19:59:50.188235  100448 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 19:59:50.188302  100448 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1205 19:59:50.188312  100448 command_runner.go:130] > # running containers
	I1205 19:59:50.188324  100448 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1205 19:59:50.188335  100448 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1205 19:59:50.188349  100448 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1205 19:59:50.188362  100448 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1205 19:59:50.188373  100448 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1205 19:59:50.188384  100448 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1205 19:59:50.188392  100448 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1205 19:59:50.188398  100448 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1205 19:59:50.188409  100448 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1205 19:59:50.188421  100448 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
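Following the runtime-handler table format documented above, a hedged sketch of registering crun as an additional handler (the binary path, drop-in name, and annotation list are illustrative, not from this run):

	sudo tee /etc/crio/crio.conf.d/20-crun.conf <<-'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	allowed_annotations = ["io.kubernetes.cri-o.Devices"]
	EOF

Pods would then select this handler through a Kubernetes RuntimeClass whose handler field is "crun".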
	I1205 19:59:50.188433  100448 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 19:59:50.188445  100448 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 19:59:50.188458  100448 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 19:59:50.188474  100448 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1205 19:59:50.188491  100448 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I1205 19:59:50.188503  100448 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 19:59:50.188521  100448 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 19:59:50.188538  100448 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 19:59:50.188550  100448 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 19:59:50.188565  100448 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 19:59:50.188574  100448 command_runner.go:130] > # Example:
	I1205 19:59:50.188583  100448 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 19:59:50.188591  100448 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 19:59:50.188607  100448 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 19:59:50.188619  100448 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 19:59:50.188629  100448 command_runner.go:130] > # cpuset = "0-1"
	I1205 19:59:50.188639  100448 command_runner.go:130] > # cpushares = 0
	I1205 19:59:50.188648  100448 command_runner.go:130] > # Where:
	I1205 19:59:50.188659  100448 command_runner.go:130] > # The workload name is workload-type.
	I1205 19:59:50.188674  100448 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 19:59:50.188684  100448 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 19:59:50.188693  100448 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 19:59:50.188716  100448 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 19:59:50.188729  100448 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
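Putting the two annotations together, a hypothetical pod opting into the workload-type example above might look like this (the pod name, image, and the "512" value are illustrative; the annotation keys follow the commented example directly above):

	cat <<-'EOF' | kubectl apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                                # activation (key only)
	    io.crio.workload-type/app: '{"cpushares": "512"}'   # per-container override
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.9
	EOF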
	I1205 19:59:50.188739  100448 command_runner.go:130] > # 
	I1205 19:59:50.188753  100448 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 19:59:50.188762  100448 command_runner.go:130] > #
	I1205 19:59:50.188775  100448 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 19:59:50.188788  100448 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1205 19:59:50.188798  100448 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1205 19:59:50.188810  100448 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1205 19:59:50.188823  100448 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
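For reference, a hedged sketch of the system-wide registries file mentioned above, marking a hypothetical internal registry as insecure (registries.conf v2 syntax; the hostname is illustrative):

	sudo tee -a /etc/containers/registries.conf <<-'EOF'
	[[registry]]
	location = "registry.local:5000"   # hypothetical registry host
	insecure = true
	EOF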
	I1205 19:59:50.188834  100448 command_runner.go:130] > [crio.image]
	I1205 19:59:50.188847  100448 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 19:59:50.188857  100448 command_runner.go:130] > # default_transport = "docker://"
	I1205 19:59:50.188870  100448 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 19:59:50.188884  100448 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 19:59:50.188892  100448 command_runner.go:130] > # global_auth_file = ""
	I1205 19:59:50.188898  100448 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 19:59:50.188909  100448 command_runner.go:130] > # This option supports live configuration reload.
	I1205 19:59:50.188924  100448 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1205 19:59:50.188942  100448 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 19:59:50.188956  100448 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 19:59:50.188968  100448 command_runner.go:130] > # This option supports live configuration reload.
	I1205 19:59:50.188979  100448 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 19:59:50.188990  100448 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 19:59:50.189000  100448 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1205 19:59:50.189014  100448 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1205 19:59:50.189027  100448 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 19:59:50.189038  100448 command_runner.go:130] > # pause_command = "/pause"
	I1205 19:59:50.189052  100448 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 19:59:50.189069  100448 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 19:59:50.189085  100448 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 19:59:50.189096  100448 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 19:59:50.189106  100448 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 19:59:50.189117  100448 command_runner.go:130] > # signature_policy = ""
	I1205 19:59:50.189138  100448 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 19:59:50.189151  100448 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 19:59:50.189167  100448 command_runner.go:130] > # changing them here.
	I1205 19:59:50.189176  100448 command_runner.go:130] > # insecure_registries = [
	I1205 19:59:50.189182  100448 command_runner.go:130] > # ]
	I1205 19:59:50.189193  100448 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 19:59:50.189227  100448 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1205 19:59:50.189241  100448 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 19:59:50.189253  100448 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 19:59:50.189263  100448 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 19:59:50.189274  100448 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 19:59:50.189281  100448 command_runner.go:130] > # CNI plugins.
	I1205 19:59:50.189288  100448 command_runner.go:130] > [crio.network]
	I1205 19:59:50.189301  100448 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 19:59:50.189314  100448 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1205 19:59:50.189325  100448 command_runner.go:130] > # cni_default_network = ""
	I1205 19:59:50.189338  100448 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 19:59:50.189348  100448 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 19:59:50.189361  100448 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 19:59:50.189369  100448 command_runner.go:130] > # plugin_dirs = [
	I1205 19:59:50.189381  100448 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 19:59:50.189390  100448 command_runner.go:130] > # ]
	I1205 19:59:50.189404  100448 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1205 19:59:50.189413  100448 command_runner.go:130] > [crio.metrics]
	I1205 19:59:50.189425  100448 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 19:59:50.189435  100448 command_runner.go:130] > # enable_metrics = false
	I1205 19:59:50.189447  100448 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 19:59:50.189455  100448 command_runner.go:130] > # By default, all metrics are enabled.
	I1205 19:59:50.189464  100448 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1205 19:59:50.189478  100448 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 19:59:50.189491  100448 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 19:59:50.189502  100448 command_runner.go:130] > # metrics_collectors = [
	I1205 19:59:50.189511  100448 command_runner.go:130] > # 	"operations",
	I1205 19:59:50.189522  100448 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1205 19:59:50.189534  100448 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1205 19:59:50.189544  100448 command_runner.go:130] > # 	"operations_errors",
	I1205 19:59:50.189554  100448 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1205 19:59:50.189562  100448 command_runner.go:130] > # 	"image_pulls_by_name",
	I1205 19:59:50.189574  100448 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1205 19:59:50.189586  100448 command_runner.go:130] > # 	"image_pulls_failures",
	I1205 19:59:50.189602  100448 command_runner.go:130] > # 	"image_pulls_successes",
	I1205 19:59:50.189612  100448 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 19:59:50.189622  100448 command_runner.go:130] > # 	"image_layer_reuse",
	I1205 19:59:50.189633  100448 command_runner.go:130] > # 	"containers_oom_total",
	I1205 19:59:50.189642  100448 command_runner.go:130] > # 	"containers_oom",
	I1205 19:59:50.189652  100448 command_runner.go:130] > # 	"processes_defunct",
	I1205 19:59:50.189658  100448 command_runner.go:130] > # 	"operations_total",
	I1205 19:59:50.189665  100448 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 19:59:50.189677  100448 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 19:59:50.189688  100448 command_runner.go:130] > # 	"operations_errors_total",
	I1205 19:59:50.189698  100448 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 19:59:50.189709  100448 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 19:59:50.189719  100448 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 19:59:50.189730  100448 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 19:59:50.189739  100448 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 19:59:50.189746  100448 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 19:59:50.189761  100448 command_runner.go:130] > # ]
	I1205 19:59:50.189774  100448 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 19:59:50.189785  100448 command_runner.go:130] > # metrics_port = 9090
	I1205 19:59:50.189797  100448 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 19:59:50.189807  100448 command_runner.go:130] > # metrics_socket = ""
	I1205 19:59:50.189818  100448 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 19:59:50.189831  100448 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 19:59:50.189842  100448 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 19:59:50.189851  100448 command_runner.go:130] > # certificate on any modification event.
	I1205 19:59:50.189861  100448 command_runner.go:130] > # metrics_cert = ""
	I1205 19:59:50.189873  100448 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 19:59:50.189885  100448 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 19:59:50.189895  100448 command_runner.go:130] > # metrics_key = ""
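A quick sketch of turning the metrics endpoint on and scraping it, assuming the defaults shown above (port 9090, plaintext; the drop-in name is illustrative):

	sudo tee /etc/crio/crio.conf.d/30-metrics.conf <<-'EOF'
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	EOF
	sudo systemctl restart crio
	curl -s http://127.0.0.1:9090/metrics | head   # Prometheus text format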
	I1205 19:59:50.189908  100448 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 19:59:50.189929  100448 command_runner.go:130] > [crio.tracing]
	I1205 19:59:50.189940  100448 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 19:59:50.189944  100448 command_runner.go:130] > # enable_tracing = false
	I1205 19:59:50.189953  100448 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1205 19:59:50.189968  100448 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1205 19:59:50.189979  100448 command_runner.go:130] > # Number of samples to collect per million spans.
	I1205 19:59:50.189991  100448 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 19:59:50.190004  100448 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 19:59:50.190014  100448 command_runner.go:130] > [crio.stats]
	I1205 19:59:50.190026  100448 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 19:59:50.190036  100448 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 19:59:50.190043  100448 command_runner.go:130] > # stats_collection_period = 0
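The dump above is CRI-O's commented configuration template. As a hedged aside, the same view can be regenerated directly on the node with crio's config subcommand:

	sudo crio config | less       # effective configuration, defaults included
	sudo crio config --default    # pristine defaults, ignoring crio.conf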
	I1205 19:59:50.190153  100448 cni.go:84] Creating CNI manager for ""
	I1205 19:59:50.190170  100448 cni.go:136] 1 nodes found, recommending kindnet
	I1205 19:59:50.190189  100448 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 19:59:50.190217  100448 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-340918 NodeName:multinode-340918 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:59:50.190377  100448 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-340918"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 19:59:50.190445  100448 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-340918 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-340918 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 19:59:50.190513  100448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 19:59:50.199482  100448 command_runner.go:130] > kubeadm
	I1205 19:59:50.199510  100448 command_runner.go:130] > kubectl
	I1205 19:59:50.199517  100448 command_runner.go:130] > kubelet
	I1205 19:59:50.199540  100448 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:59:50.199589  100448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 19:59:50.207260  100448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1205 19:59:50.222354  100448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:59:50.238024  100448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
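After the drop-in and unit are scp'd above, the usual follow-up would be a daemon reload (minikube performs this later in the flow, outside this excerpt), sketched here:

	sudo systemctl daemon-reload
	systemctl cat kubelet   # shows the unit plus the 10-kubeadm.conf drop-in above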
	I1205 19:59:50.253501  100448 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1205 19:59:50.256583  100448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:59:50.265978  100448 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918 for IP: 192.168.58.2
	I1205 19:59:50.266009  100448 certs.go:190] acquiring lock for shared ca certs: {Name:mk6fbd7b27250f9a01d87d327232e4afd0539a2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:59:50.266146  100448 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key
	I1205 19:59:50.266210  100448 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key
	I1205 19:59:50.266272  100448 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/client.key
	I1205 19:59:50.266295  100448 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/client.crt with IP's: []
	I1205 19:59:50.328386  100448 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/client.crt ...
	I1205 19:59:50.328419  100448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/client.crt: {Name:mkf2aa88219549747265f95048ca408c7e10c217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:59:50.328609  100448 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/client.key ...
	I1205 19:59:50.328630  100448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/client.key: {Name:mkaaa90b1e2da65972c1a1565820a46570daca0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:59:50.328727  100448 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/apiserver.key.cee25041
	I1205 19:59:50.328744  100448 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1205 19:59:50.423455  100448 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/apiserver.crt.cee25041 ...
	I1205 19:59:50.423490  100448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/apiserver.crt.cee25041: {Name:mk17b535cc53aebbf01ec8594fc2748eb5e225b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:59:50.423672  100448 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/apiserver.key.cee25041 ...
	I1205 19:59:50.423690  100448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/apiserver.key.cee25041: {Name:mk3a333288f62391d6fd7319a5b7671dc5be37b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:59:50.423792  100448 certs.go:337] copying /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/apiserver.crt
	I1205 19:59:50.423895  100448 certs.go:341] copying /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/apiserver.key
	I1205 19:59:50.423982  100448 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/proxy-client.key
	I1205 19:59:50.424006  100448 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/proxy-client.crt with IP's: []
	I1205 19:59:50.566820  100448 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/proxy-client.crt ...
	I1205 19:59:50.566857  100448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/proxy-client.crt: {Name:mk775afcd57f72032008d9a4cfd360d53f24d86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:59:50.567032  100448 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/proxy-client.key ...
	I1205 19:59:50.567051  100448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/proxy-client.key: {Name:mkd29b6b4bfe6a45cdd87cb35d5f7feab1d073a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:59:50.567147  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:59:50.567173  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:59:50.567196  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:59:50.567216  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:59:50.567232  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:59:50.567252  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:59:50.567271  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:59:50.567292  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:59:50.567355  100448 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/12883.pem (1338 bytes)
	W1205 19:59:50.567408  100448 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/12883_empty.pem, impossibly tiny 0 bytes
	I1205 19:59:50.567426  100448 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 19:59:50.567461  100448 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:59:50.567497  100448 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:59:50.567533  100448 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem (1679 bytes)
	I1205 19:59:50.567588  100448 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem (1708 bytes)
	I1205 19:59:50.567627  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:59:50.567647  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/12883.pem -> /usr/share/ca-certificates/12883.pem
	I1205 19:59:50.567666  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem -> /usr/share/ca-certificates/128832.pem
	I1205 19:59:50.568281  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 19:59:50.589033  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 19:59:50.609444  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:59:50.630196  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 19:59:50.650842  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:59:50.672110  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:59:50.692890  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:59:50.714023  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 19:59:50.735572  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:59:50.757837  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/certs/12883.pem --> /usr/share/ca-certificates/12883.pem (1338 bytes)
	I1205 19:59:50.780042  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem --> /usr/share/ca-certificates/128832.pem (1708 bytes)
	I1205 19:59:50.801905  100448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:59:50.817625  100448 ssh_runner.go:195] Run: openssl version
	I1205 19:59:50.822423  100448 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1205 19:59:50.822552  100448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:59:50.830846  100448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:59:50.833948  100448 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:59:50.833978  100448 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:59:50.834024  100448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:59:50.839907  100448 command_runner.go:130] > b5213941
	I1205 19:59:50.840120  100448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:59:50.848430  100448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12883.pem && ln -fs /usr/share/ca-certificates/12883.pem /etc/ssl/certs/12883.pem"
	I1205 19:59:50.857131  100448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12883.pem
	I1205 19:59:50.860340  100448 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 19:46 /usr/share/ca-certificates/12883.pem
	I1205 19:59:50.860372  100448 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:46 /usr/share/ca-certificates/12883.pem
	I1205 19:59:50.860409  100448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12883.pem
	I1205 19:59:50.866609  100448 command_runner.go:130] > 51391683
	I1205 19:59:50.866690  100448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12883.pem /etc/ssl/certs/51391683.0"
	I1205 19:59:50.875707  100448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128832.pem && ln -fs /usr/share/ca-certificates/128832.pem /etc/ssl/certs/128832.pem"
	I1205 19:59:50.884559  100448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128832.pem
	I1205 19:59:50.887771  100448 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 19:46 /usr/share/ca-certificates/128832.pem
	I1205 19:59:50.887807  100448 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:46 /usr/share/ca-certificates/128832.pem
	I1205 19:59:50.887840  100448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128832.pem
	I1205 19:59:50.893868  100448 command_runner.go:130] > 3ec20f2e
	I1205 19:59:50.893932  100448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128832.pem /etc/ssl/certs/3ec20f2e.0"
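The three hash-and-symlink passes above all follow the same OpenSSL convention: CA certificates in /etc/ssl/certs are looked up via symlinks named <subject-hash>.0. In generic form (paths as used in this run):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # b5213941 in the log above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	openssl verify -CApath /etc/ssl/certs "$CERT"    # self-signed CA now verifies OK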
	I1205 19:59:50.904507  100448 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 19:59:50.907416  100448 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 19:59:50.907458  100448 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 19:59:50.907511  100448 kubeadm.go:404] StartCluster: {Name:multinode-340918 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-340918 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:59:50.907583  100448 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:59:50.907636  100448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:59:50.939677  100448 cri.go:89] found id: ""
	I1205 19:59:50.939746  100448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:59:50.947178  100448 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1205 19:59:50.947205  100448 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1205 19:59:50.947215  100448 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1205 19:59:50.947804  100448 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:59:50.955752  100448 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1205 19:59:50.955796  100448 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:59:50.963808  100448 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1205 19:59:50.963831  100448 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1205 19:59:50.963839  100448 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1205 19:59:50.963846  100448 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:59:50.963879  100448 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:59:50.963917  100448 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 19:59:51.008839  100448 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1205 19:59:51.008902  100448 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1205 19:59:51.008960  100448 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 19:59:51.008978  100448 command_runner.go:130] > [preflight] Running pre-flight checks
	I1205 19:59:51.042728  100448 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1205 19:59:51.042752  100448 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1205 19:59:51.042792  100448 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1205 19:59:51.042799  100448 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I1205 19:59:51.042837  100448 kubeadm.go:322] OS: Linux
	I1205 19:59:51.042844  100448 command_runner.go:130] > OS: Linux
	I1205 19:59:51.042878  100448 kubeadm.go:322] CGROUPS_CPU: enabled
	I1205 19:59:51.042885  100448 command_runner.go:130] > CGROUPS_CPU: enabled
	I1205 19:59:51.042957  100448 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1205 19:59:51.042991  100448 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1205 19:59:51.043052  100448 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1205 19:59:51.043066  100448 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1205 19:59:51.043121  100448 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1205 19:59:51.043133  100448 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1205 19:59:51.043192  100448 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1205 19:59:51.043230  100448 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1205 19:59:51.043277  100448 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1205 19:59:51.043284  100448 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1205 19:59:51.043316  100448 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1205 19:59:51.043339  100448 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1205 19:59:51.043412  100448 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1205 19:59:51.043426  100448 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1205 19:59:51.043499  100448 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1205 19:59:51.043508  100448 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1205 19:59:51.105806  100448 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:59:51.105834  100448 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:59:51.105917  100448 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:59:51.105925  100448 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:59:51.105991  100448 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 19:59:51.105998  100448 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 19:59:51.297615  100448 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:59:51.301509  100448 out.go:204]   - Generating certificates and keys ...
	I1205 19:59:51.297662  100448 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:59:51.301671  100448 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 19:59:51.301691  100448 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1205 19:59:51.301764  100448 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 19:59:51.301772  100448 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1205 19:59:51.420455  100448 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:59:51.420480  100448 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:59:51.505921  100448 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:59:51.505957  100448 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:59:51.656562  100448 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:59:51.656587  100448 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1205 19:59:51.815118  100448 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1205 19:59:51.815146  100448 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1205 19:59:51.950494  100448 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1205 19:59:51.950527  100448 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1205 19:59:51.950672  100448 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-340918] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1205 19:59:51.950698  100448 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-340918] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1205 19:59:52.096956  100448 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1205 19:59:52.096986  100448 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1205 19:59:52.097134  100448 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-340918] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1205 19:59:52.097145  100448 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-340918] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1205 19:59:52.211478  100448 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:59:52.211507  100448 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:59:52.324309  100448 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:59:52.324338  100448 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:59:52.383506  100448 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1205 19:59:52.383538  100448 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1205 19:59:52.383620  100448 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:59:52.383633  100448 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:59:52.454152  100448 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:59:52.454178  100448 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:59:52.633263  100448 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:59:52.633310  100448 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:59:52.752557  100448 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:59:52.752582  100448 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:59:52.861595  100448 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:59:52.861625  100448 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:59:52.862005  100448 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:59:52.862024  100448 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:59:52.864304  100448 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:59:52.866579  100448 out.go:204]   - Booting up control plane ...
	I1205 19:59:52.864349  100448 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:59:52.866681  100448 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:59:52.866706  100448 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:59:52.866845  100448 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:59:52.866860  100448 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:59:52.866916  100448 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:59:52.866923  100448 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:59:52.874509  100448 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:59:52.874525  100448 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:59:52.875233  100448 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:59:52.875249  100448 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:59:52.875306  100448 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 19:59:52.875325  100448 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1205 19:59:52.954938  100448 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 19:59:52.954965  100448 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 19:59:57.957585  100448 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002640 seconds
	I1205 19:59:57.957614  100448 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.002640 seconds
	I1205 19:59:57.957758  100448 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:59:57.957787  100448 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:59:57.971228  100448 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:59:57.971254  100448 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:59:58.491212  100448 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:59:58.491235  100448 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:59:58.491399  100448 kubeadm.go:322] [mark-control-plane] Marking the node multinode-340918 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 19:59:58.491416  100448 command_runner.go:130] > [mark-control-plane] Marking the node multinode-340918 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 19:59:59.001179  100448 kubeadm.go:322] [bootstrap-token] Using token: yn5v27.6q0e8zbwnmpjuw6q
	I1205 19:59:59.001214  100448 command_runner.go:130] > [bootstrap-token] Using token: yn5v27.6q0e8zbwnmpjuw6q
	I1205 19:59:59.002878  100448 out.go:204]   - Configuring RBAC rules ...
	I1205 19:59:59.003013  100448 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:59:59.003031  100448 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:59:59.006762  100448 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:59:59.006782  100448 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:59:59.012279  100448 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:59:59.012297  100448 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:59:59.015517  100448 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:59:59.015538  100448 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:59:59.017877  100448 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:59:59.017894  100448 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:59:59.020124  100448 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:59:59.020142  100448 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:59:59.029291  100448 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:59:59.029308  100448 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:59:59.233059  100448 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 19:59:59.233085  100448 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1205 19:59:59.429055  100448 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 19:59:59.429083  100448 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1205 19:59:59.430040  100448 kubeadm.go:322] 
	I1205 19:59:59.430107  100448 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 19:59:59.430125  100448 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1205 19:59:59.430131  100448 kubeadm.go:322] 
	I1205 19:59:59.430279  100448 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 19:59:59.430319  100448 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1205 19:59:59.430347  100448 kubeadm.go:322] 
	I1205 19:59:59.430383  100448 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 19:59:59.430393  100448 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1205 19:59:59.430457  100448 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:59:59.430469  100448 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:59:59.430513  100448 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:59:59.430520  100448 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:59:59.430523  100448 kubeadm.go:322] 
	I1205 19:59:59.430615  100448 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 19:59:59.430625  100448 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1205 19:59:59.430631  100448 kubeadm.go:322] 
	I1205 19:59:59.430690  100448 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 19:59:59.430701  100448 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 19:59:59.430706  100448 kubeadm.go:322] 
	I1205 19:59:59.430769  100448 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 19:59:59.430782  100448 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1205 19:59:59.430889  100448 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:59:59.430907  100448 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:59:59.431000  100448 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:59:59.431011  100448 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:59:59.431017  100448 kubeadm.go:322] 
	I1205 19:59:59.431127  100448 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:59:59.431137  100448 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:59:59.431241  100448 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 19:59:59.431258  100448 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1205 19:59:59.431279  100448 kubeadm.go:322] 
	I1205 19:59:59.431405  100448 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token yn5v27.6q0e8zbwnmpjuw6q \
	I1205 19:59:59.431424  100448 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token yn5v27.6q0e8zbwnmpjuw6q \
	I1205 19:59:59.431534  100448 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f61b399cb6776d724c7cf1a9a4fb9913cb1ff908aabc5bdeeadc4488475094de \
	I1205 19:59:59.431550  100448 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f61b399cb6776d724c7cf1a9a4fb9913cb1ff908aabc5bdeeadc4488475094de \
	I1205 19:59:59.431594  100448 kubeadm.go:322] 	--control-plane 
	I1205 19:59:59.431604  100448 command_runner.go:130] > 	--control-plane 
	I1205 19:59:59.431610  100448 kubeadm.go:322] 
	I1205 19:59:59.431795  100448 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:59:59.431817  100448 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:59:59.431843  100448 kubeadm.go:322] 
	I1205 19:59:59.431944  100448 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token yn5v27.6q0e8zbwnmpjuw6q \
	I1205 19:59:59.431952  100448 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token yn5v27.6q0e8zbwnmpjuw6q \
	I1205 19:59:59.432049  100448 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f61b399cb6776d724c7cf1a9a4fb9913cb1ff908aabc5bdeeadc4488475094de 
	I1205 19:59:59.432052  100448 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f61b399cb6776d724c7cf1a9a4fb9913cb1ff908aabc5bdeeadc4488475094de 
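The join commands printed above embed a bootstrap token (valid for 24 hours by default) and the CA certificate hash. A hedged sketch for regenerating both on the control plane if the token expires; the CA path follows the certificateDir logged earlier:

	# print a fresh, complete worker join command
	sudo kubeadm token create --print-join-command
	# recompute the discovery CA cert hash by hand (standard openssl recipe)
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt | \
	  openssl rsa -pubin -outform der 2>/dev/null | \
	  openssl dgst -sha256 -hex | sed 's/^.* //'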
	I1205 19:59:59.435198  100448 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1205 19:59:59.435218  100448 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1205 19:59:59.435360  100448 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 19:59:59.435376  100448 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 19:59:59.435393  100448 cni.go:84] Creating CNI manager for ""
	I1205 19:59:59.435408  100448 cni.go:136] 1 nodes found, recommending kindnet
	I1205 19:59:59.437034  100448 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 19:59:59.438592  100448 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 19:59:59.442425  100448 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1205 19:59:59.442449  100448 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I1205 19:59:59.442459  100448 command_runner.go:130] > Device: 36h/54d	Inode: 547389      Links: 1
	I1205 19:59:59.442470  100448 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 19:59:59.442479  100448 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1205 19:59:59.442498  100448 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1205 19:59:59.442510  100448 command_runner.go:130] > Change: 2023-12-05 19:35:18.154877769 +0000
	I1205 19:59:59.442521  100448 command_runner.go:130] >  Birth: 2023-12-05 19:35:18.130876112 +0000
	I1205 19:59:59.442580  100448 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1205 19:59:59.442602  100448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1205 19:59:59.459772  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 20:00:00.124533  100448 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1205 20:00:00.129799  100448 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1205 20:00:00.138398  100448 command_runner.go:130] > serviceaccount/kindnet created
	I1205 20:00:00.148760  100448 command_runner.go:130] > daemonset.apps/kindnet created
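The applied manifest created the kindnet daemonset above. A quick sketch for confirming it scheduled and became ready (the app=kindnet label is an assumption based on the stock kindnet manifest, not something shown in this log):

	kubectl -n kube-system rollout status daemonset/kindnet --timeout=60s
	kubectl -n kube-system get pods -l app=kindnet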
	I1205 20:00:00.152885  100448 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:00:00.152983  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:00.152996  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=multinode-340918 minikube.k8s.io/updated_at=2023_12_05T20_00_00_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:00.160938  100448 command_runner.go:130] > -16
	I1205 20:00:00.238356  100448 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1205 20:00:00.241864  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:00.250689  100448 ops.go:34] apiserver oom_adj: -16
	I1205 20:00:00.250718  100448 command_runner.go:130] > node/multinode-340918 labeled
	I1205 20:00:00.307569  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:00.307680  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:00.379300  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:00.882946  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:00.946648  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:01.382751  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:01.445931  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:01.882473  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:01.944172  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:02.382422  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:02.442919  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:02.883134  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:02.944718  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:03.383015  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:03.444854  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:03.882851  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:03.946261  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:04.382718  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:04.446438  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:04.883075  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:04.945499  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:05.382404  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:05.444337  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:05.882353  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:05.943454  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:06.382958  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:06.443002  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:06.883232  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:06.947936  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:07.382497  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:07.448094  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:07.882678  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:07.952013  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:08.383069  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:08.447009  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:08.882625  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:08.947221  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:09.382850  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:09.446568  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:09.883257  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:09.944475  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:10.382807  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:10.444761  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:10.882871  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:10.955238  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:11.382506  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:11.447100  100448 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:00:11.882348  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:00:11.949805  100448 command_runner.go:130] > NAME      SECRETS   AGE
	I1205 20:00:11.949825  100448 command_runner.go:130] > default   0         0s
	I1205 20:00:11.952117  100448 kubeadm.go:1088] duration metric: took 11.799207057s to wait for elevateKubeSystemPrivileges.
	I1205 20:00:11.952149  100448 kubeadm.go:406] StartCluster complete in 21.044643375s
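The repeated 'serviceaccounts "default" not found' lines above are minikube polling until the controller manager creates the default ServiceAccount; elevateKubeSystemPrivileges returns once the lookup succeeds. A minimal shell equivalent of that wait loop:

	# block until the "default" ServiceAccount exists in the default namespace
	until kubectl get serviceaccount default -n default >/dev/null 2>&1; do
	  sleep 0.5
	done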
	I1205 20:00:11.952172  100448 settings.go:142] acquiring lock: {Name:mkfaf26f24f59aefb8a41893ed2faf05d01ae7b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:00:11.952273  100448 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 20:00:11.953160  100448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/kubeconfig: {Name:mk1f41ec1ae8a6de6a6de4f641695e135340252f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:00:11.953462  100448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:00:11.953481  100448 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:00:11.953559  100448 addons.go:69] Setting storage-provisioner=true in profile "multinode-340918"
	I1205 20:00:11.953581  100448 addons.go:231] Setting addon storage-provisioner=true in "multinode-340918"
	I1205 20:00:11.953579  100448 addons.go:69] Setting default-storageclass=true in profile "multinode-340918"
	I1205 20:00:11.953606  100448 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-340918"
	I1205 20:00:11.953633  100448 host.go:66] Checking if "multinode-340918" exists ...
	I1205 20:00:11.953658  100448 config.go:182] Loaded profile config "multinode-340918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:00:11.953800  100448 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 20:00:11.954006  100448 cli_runner.go:164] Run: docker container inspect multinode-340918 --format={{.State.Status}}
	I1205 20:00:11.954174  100448 cli_runner.go:164] Run: docker container inspect multinode-340918 --format={{.State.Status}}
	I1205 20:00:11.954115  100448 kapi.go:59] client config for multinode-340918: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:00:11.954877  100448 cert_rotation.go:137] Starting client certificate rotation controller
	I1205 20:00:11.955131  100448 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1205 20:00:11.955150  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:11.955160  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:11.955169  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:11.965209  100448 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1205 20:00:11.965254  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:11.965264  100448 round_trippers.go:580]     Audit-Id: dd86a434-4ddd-4c7b-a085-8e3dfc727197
	I1205 20:00:11.965272  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:11.965280  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:11.965287  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:11.965297  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:11.965311  100448 round_trippers.go:580]     Content-Length: 291
	I1205 20:00:11.965324  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:11 GMT
	I1205 20:00:11.965361  100448 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"49f4db43-89bb-40a9-adb1-a6e95567806b","resourceVersion":"314","creationTimestamp":"2023-12-05T19:59:59Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1205 20:00:11.965878  100448 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"49f4db43-89bb-40a9-adb1-a6e95567806b","resourceVersion":"314","creationTimestamp":"2023-12-05T19:59:59Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1205 20:00:11.965955  100448 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1205 20:00:11.965968  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:11.965980  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:11.965990  100448 round_trippers.go:473]     Content-Type: application/json
	I1205 20:00:11.966003  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:11.973213  100448 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 20:00:11.973234  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:11.973244  100448 round_trippers.go:580]     Audit-Id: 5eb68cd7-e242-4444-bb76-f0596319a96a
	I1205 20:00:11.973249  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:11.973255  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:11.973260  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:11.973265  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:11.973271  100448 round_trippers.go:580]     Content-Length: 291
	I1205 20:00:11.973276  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:11 GMT
	I1205 20:00:11.973296  100448 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"49f4db43-89bb-40a9-adb1-a6e95567806b","resourceVersion":"333","creationTimestamp":"2023-12-05T19:59:59Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1205 20:00:11.973446  100448 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1205 20:00:11.973460  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:11.973468  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:11.973476  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:11.976877  100448 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:00:11.977634  100448 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 20:00:11.978799  100448 kapi.go:59] client config for multinode-340918: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:00:11.978987  100448 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:00:11.979008  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:11.979015  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:11 GMT
	I1205 20:00:11.979021  100448 round_trippers.go:580]     Audit-Id: 39eed521-f1b8-44fb-bc2b-548140143b18
	I1205 20:00:11.979026  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:11.979029  100448 addons.go:231] Setting addon default-storageclass=true in "multinode-340918"
	I1205 20:00:11.979032  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:11.979041  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:11.979047  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:11.979053  100448 round_trippers.go:580]     Content-Length: 291
	I1205 20:00:11.979054  100448 host.go:66] Checking if "multinode-340918" exists ...
	I1205 20:00:11.979086  100448 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"49f4db43-89bb-40a9-adb1-a6e95567806b","resourceVersion":"333","creationTimestamp":"2023-12-05T19:59:59Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1205 20:00:11.979191  100448 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-340918" context rescaled to 1 replicas
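The GET/PUT pair above rewrites the Scale subresource of the coredns deployment from 2 replicas down to 1. minikube issues the REST calls directly; a sketch of the equivalent kubectl invocation (not what minikube itself runs):

	kubectl --context multinode-340918 -n kube-system scale deployment coredns --replicas=1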
	I1205 20:00:11.979229  100448 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:00:11.981310  100448 out.go:177] * Verifying Kubernetes components...
	I1205 20:00:11.979405  100448 cli_runner.go:164] Run: docker container inspect multinode-340918 --format={{.State.Status}}
	I1205 20:00:11.979465  100448 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:00:11.982960  100448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:00:11.983023  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918
	I1205 20:00:11.983022  100448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:00:12.004274  100448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918/id_rsa Username:docker}
	I1205 20:00:12.005750  100448 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:00:12.005769  100448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:00:12.005822  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918
	I1205 20:00:12.023106  100448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918/id_rsa Username:docker}
	I1205 20:00:12.054680  100448 command_runner.go:130] > apiVersion: v1
	I1205 20:00:12.054706  100448 command_runner.go:130] > data:
	I1205 20:00:12.054713  100448 command_runner.go:130] >   Corefile: |
	I1205 20:00:12.054720  100448 command_runner.go:130] >     .:53 {
	I1205 20:00:12.054726  100448 command_runner.go:130] >         errors
	I1205 20:00:12.054734  100448 command_runner.go:130] >         health {
	I1205 20:00:12.054740  100448 command_runner.go:130] >            lameduck 5s
	I1205 20:00:12.054744  100448 command_runner.go:130] >         }
	I1205 20:00:12.054748  100448 command_runner.go:130] >         ready
	I1205 20:00:12.054754  100448 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1205 20:00:12.054759  100448 command_runner.go:130] >            pods insecure
	I1205 20:00:12.054765  100448 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1205 20:00:12.054769  100448 command_runner.go:130] >            ttl 30
	I1205 20:00:12.054775  100448 command_runner.go:130] >         }
	I1205 20:00:12.054782  100448 command_runner.go:130] >         prometheus :9153
	I1205 20:00:12.054790  100448 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1205 20:00:12.054795  100448 command_runner.go:130] >            max_concurrent 1000
	I1205 20:00:12.054804  100448 command_runner.go:130] >         }
	I1205 20:00:12.054811  100448 command_runner.go:130] >         cache 30
	I1205 20:00:12.054820  100448 command_runner.go:130] >         loop
	I1205 20:00:12.054827  100448 command_runner.go:130] >         reload
	I1205 20:00:12.054837  100448 command_runner.go:130] >         loadbalance
	I1205 20:00:12.054843  100448 command_runner.go:130] >     }
	I1205 20:00:12.054853  100448 command_runner.go:130] > kind: ConfigMap
	I1205 20:00:12.054860  100448 command_runner.go:130] > metadata:
	I1205 20:00:12.054876  100448 command_runner.go:130] >   creationTimestamp: "2023-12-05T19:59:59Z"
	I1205 20:00:12.054886  100448 command_runner.go:130] >   name: coredns
	I1205 20:00:12.054893  100448 command_runner.go:130] >   namespace: kube-system
	I1205 20:00:12.054900  100448 command_runner.go:130] >   resourceVersion: "229"
	I1205 20:00:12.054909  100448 command_runner.go:130] >   uid: 8f097181-4bb5-4cfc-8fde-3a130711cc0c
	I1205 20:00:12.055082  100448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
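The sed pipeline above splices a log directive before errors and a hosts block before the forward stanza, then replaces the ConfigMap. Reconstructed from those sed expressions and the Corefile dumped above (not captured output), the edited fragment should read roughly:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.58.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf {
	       max_concurrent 1000
	    }
	    ...
	}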
	I1205 20:00:12.055353  100448 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 20:00:12.055679  100448 kapi.go:59] client config for multinode-340918: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:00:12.056010  100448 node_ready.go:35] waiting up to 6m0s for node "multinode-340918" to be "Ready" ...
	I1205 20:00:12.056105  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:12.056115  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:12.056128  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:12.056141  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:12.060349  100448 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:00:12.060384  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:12.060396  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:12.060405  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:12.060413  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:12 GMT
	I1205 20:00:12.060422  100448 round_trippers.go:580]     Audit-Id: 5082e113-8f2b-4f33-8485-d81a071b9e3a
	I1205 20:00:12.060429  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:12.060451  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:12.060582  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:12.061482  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:12.061511  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:12.061521  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:12.061530  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:12.065298  100448 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:00:12.065322  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:12.065331  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:12.065340  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:12.065348  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:12.065357  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:12.065370  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:12 GMT
	I1205 20:00:12.065381  100448 round_trippers.go:580]     Audit-Id: c3978f1d-38a8-4173-a760-5af32a73b565
	I1205 20:00:12.065539  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:12.150008  100448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:00:12.244118  100448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:00:12.566775  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:12.566797  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:12.566805  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:12.566812  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:12.627265  100448 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I1205 20:00:12.627297  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:12.627309  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:12.627319  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:12.627328  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:12.627337  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:12 GMT
	I1205 20:00:12.627346  100448 round_trippers.go:580]     Audit-Id: d1e19da7-48da-4af2-afe1-df1b8e5d3015
	I1205 20:00:12.627361  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:12.627534  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:12.748171  100448 command_runner.go:130] > configmap/coredns replaced
	I1205 20:00:12.748238  100448 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1205 20:00:12.997042  100448 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1205 20:00:13.003689  100448 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1205 20:00:13.010639  100448 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1205 20:00:13.017672  100448 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1205 20:00:13.027736  100448 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1205 20:00:13.037104  100448 command_runner.go:130] > pod/storage-provisioner created
	I1205 20:00:13.041387  100448 command_runner.go:130] > storageclass.storage.k8s.io/standard created
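With both addon manifests applied, the created objects can be spot-checked; the storageclasses GET/PUT that follows re-asserts the is-default-class annotation on "standard". A hedged verification sketch:

	kubectl -n kube-system get pod storage-provisioner
	# dots inside the annotation key must be backslash-escaped in jsonpath
	kubectl get storageclass standard \
	  -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'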
	I1205 20:00:13.041521  100448 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1205 20:00:13.041535  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:13.041546  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:13.041558  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:13.043632  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:13.043655  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:13.043666  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:13.043672  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:13.043677  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:13.043683  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:13.043691  100448 round_trippers.go:580]     Content-Length: 1273
	I1205 20:00:13.043697  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:13 GMT
	I1205 20:00:13.043704  100448 round_trippers.go:580]     Audit-Id: 72bcd857-a4f4-4adf-9409-15d5b68b7956
	I1205 20:00:13.043759  100448 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"370"},"items":[{"metadata":{"name":"standard","uid":"4e1bcee2-6b18-4260-b1b5-25b891928e60","resourceVersion":"361","creationTimestamp":"2023-12-05T20:00:12Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-05T20:00:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1205 20:00:13.044092  100448 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"4e1bcee2-6b18-4260-b1b5-25b891928e60","resourceVersion":"361","creationTimestamp":"2023-12-05T20:00:12Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-05T20:00:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1205 20:00:13.044131  100448 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1205 20:00:13.044138  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:13.044145  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:13.044154  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:13.044160  100448 round_trippers.go:473]     Content-Type: application/json
	I1205 20:00:13.046756  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:13.046778  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:13.046788  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:13.046797  100448 round_trippers.go:580]     Content-Length: 1220
	I1205 20:00:13.046805  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:13 GMT
	I1205 20:00:13.046813  100448 round_trippers.go:580]     Audit-Id: c617b60c-8a3a-48a1-b8c7-914a115148c9
	I1205 20:00:13.046829  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:13.046837  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:13.046850  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:13.046879  100448 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"4e1bcee2-6b18-4260-b1b5-25b891928e60","resourceVersion":"361","creationTimestamp":"2023-12-05T20:00:12Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-05T20:00:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1205 20:00:13.049801  100448 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1205 20:00:13.051749  100448 addons.go:502] enable addons completed in 1.098268741s: enabled=[storage-provisioner default-storageclass]
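
The GET of /storageclasses followed by the PUT of standard above is a read-modify-write that ensures the addon-managed StorageClass carries the storageclass.kubernetes.io/is-default-class=true annotation. A sketch of the same round trip with client-go (hypothetical kubeconfig path; error handling kept minimal):

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// Read-modify-write, mirroring the GET followed by PUT in the trace above.
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}

The PUT carries the full object returned by the GET (note the matching resourceVersion 361 in both bodies), so a concurrent modification would be rejected as a conflict rather than silently overwritten.
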
	I1205 20:00:13.065978  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:13.065996  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:13.066003  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:13.066010  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:13.068315  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:13.068339  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:13.068350  100448 round_trippers.go:580]     Audit-Id: 629939bb-cae8-47d4-b09d-bd667b89f205
	I1205 20:00:13.068358  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:13.068366  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:13.068374  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:13.068385  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:13.068397  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:13 GMT
	I1205 20:00:13.068485  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:13.566088  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:13.566116  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:13.566124  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:13.566133  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:13.568436  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:13.568459  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:13.568466  100448 round_trippers.go:580]     Audit-Id: 066bf506-5bb7-4f28-87e5-ec71961aac50
	I1205 20:00:13.568472  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:13.568477  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:13.568484  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:13.568492  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:13.568497  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:13 GMT
	I1205 20:00:13.568674  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:14.066207  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:14.066235  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:14.066244  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:14.066250  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:14.068650  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:14.068671  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:14.068679  100448 round_trippers.go:580]     Audit-Id: 717bf8ef-6d93-4a81-a181-e1641c9a89c6
	I1205 20:00:14.068686  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:14.068694  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:14.068704  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:14.068712  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:14.068721  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:14 GMT
	I1205 20:00:14.068834  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:14.069304  100448 node_ready.go:58] node "multinode-340918" has status "Ready":"False"
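
From here the trace is one long readiness loop: roughly every 500ms the Node object is re-fetched and its Ready condition inspected, with node_ready.go reporting "Ready":"False" until the kubelet posts Ready. A sketch of an equivalent poll with client-go and apimachinery's wait helpers, assuming wait.PollUntilContextTimeout is available (apimachinery v0.27+), a hypothetical kubeconfig path, and an illustrative 6-minute timeout:

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Re-fetch the Node every 500ms until its Ready condition turns True,
	// like the node_ready loop in the trace above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-340918", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("node multinode-340918 is Ready")
}

The loop exits as soon as the condition flips, which is why the repeated GETs below all return the same resourceVersion 317 object until the node transitions.
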
	I1205 20:00:14.566405  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:14.566428  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:14.566436  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:14.566442  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:14.568895  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:14.568919  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:14.568926  100448 round_trippers.go:580]     Audit-Id: 12e35538-33d2-4a69-92ab-d6949ecefb76
	I1205 20:00:14.568932  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:14.568937  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:14.568942  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:14.568953  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:14.568967  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:14 GMT
	I1205 20:00:14.569097  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:15.066554  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:15.066590  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:15.066599  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:15.066604  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:15.068965  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:15.068988  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:15.068997  100448 round_trippers.go:580]     Audit-Id: b1bc9156-052e-4558-b310-359b5f82b582
	I1205 20:00:15.069003  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:15.069008  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:15.069015  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:15.069024  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:15.069034  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:15 GMT
	I1205 20:00:15.069167  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:15.566749  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:15.566774  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:15.566782  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:15.566788  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:15.569276  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:15.569296  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:15.569303  100448 round_trippers.go:580]     Audit-Id: 9ba88889-4a5a-411f-be11-17d96d4f0205
	I1205 20:00:15.569309  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:15.569318  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:15.569329  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:15.569339  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:15.569347  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:15 GMT
	I1205 20:00:15.569454  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:16.065995  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:16.066019  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:16.066028  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:16.066034  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:16.068243  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:16.068261  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:16.068268  100448 round_trippers.go:580]     Audit-Id: 51eed878-bb18-44bf-bd2c-4a7b06b1eefb
	I1205 20:00:16.068273  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:16.068281  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:16.068290  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:16.068298  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:16.068311  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:16 GMT
	I1205 20:00:16.068467  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:16.566067  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:16.566095  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:16.566105  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:16.566113  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:16.568298  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:16.568324  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:16.568335  100448 round_trippers.go:580]     Audit-Id: 79f35a30-7a85-46b8-9166-5e5ab51c0c59
	I1205 20:00:16.568343  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:16.568350  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:16.568363  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:16.568374  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:16.568382  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:16 GMT
	I1205 20:00:16.568536  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:16.568875  100448 node_ready.go:58] node "multinode-340918" has status "Ready":"False"
	I1205 20:00:17.066688  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:17.066711  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:17.066724  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:17.066732  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:17.069056  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:17.069075  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:17.069081  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:17.069087  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:17.069092  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:17 GMT
	I1205 20:00:17.069110  100448 round_trippers.go:580]     Audit-Id: e29b25f1-50f7-46da-96c9-84a8ed664d41
	I1205 20:00:17.069122  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:17.069133  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:17.069295  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:17.566963  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:17.566989  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:17.566997  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:17.567003  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:17.569371  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:17.569393  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:17.569400  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:17.569406  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:17.569411  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:17 GMT
	I1205 20:00:17.569417  100448 round_trippers.go:580]     Audit-Id: 76e1843c-79fc-4adf-b149-648fc7af1a80
	I1205 20:00:17.569421  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:17.569426  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:17.569628  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:18.066518  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:18.066555  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:18.066562  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:18.066569  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:18.068905  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:18.068927  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:18.068936  100448 round_trippers.go:580]     Audit-Id: 23528871-dcc2-438d-8b4d-7ada11654363
	I1205 20:00:18.068943  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:18.068950  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:18.068959  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:18.068968  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:18.068980  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:18 GMT
	I1205 20:00:18.069136  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:18.566785  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:18.566810  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:18.566819  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:18.566825  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:18.569242  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:18.569269  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:18.569277  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:18 GMT
	I1205 20:00:18.569282  100448 round_trippers.go:580]     Audit-Id: d1cf8eb9-4422-4e53-a9eb-f6022c92020f
	I1205 20:00:18.569287  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:18.569292  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:18.569297  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:18.569303  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:18.569464  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:18.569928  100448 node_ready.go:58] node "multinode-340918" has status "Ready":"False"
	I1205 20:00:19.067027  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:19.067070  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:19.067080  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:19.067088  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:19.069383  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:19.069407  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:19.069415  100448 round_trippers.go:580]     Audit-Id: 347a2f09-f677-4093-addd-3f2d3b7c1e3a
	I1205 20:00:19.069420  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:19.069426  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:19.069430  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:19.069436  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:19.069440  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:19 GMT
	I1205 20:00:19.069591  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:19.566172  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:19.566202  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:19.566222  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:19.566230  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:19.568828  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:19.568851  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:19.568862  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:19.568869  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:19.568876  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:19 GMT
	I1205 20:00:19.568884  100448 round_trippers.go:580]     Audit-Id: a78f5f51-9dbc-46f4-b0f5-c604ae9a35ba
	I1205 20:00:19.568892  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:19.568903  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:19.569022  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:20.066636  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:20.066662  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:20.066670  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:20.066676  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:20.069021  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:20.069048  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:20.069058  100448 round_trippers.go:580]     Audit-Id: d9243955-7a14-4a22-8d5b-1623cae31d04
	I1205 20:00:20.069068  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:20.069076  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:20.069085  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:20.069093  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:20.069106  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:20 GMT
	I1205 20:00:20.069254  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:20.566857  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:20.566881  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:20.566888  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:20.566895  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:20.569180  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:20.569202  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:20.569208  100448 round_trippers.go:580]     Audit-Id: e4cdfaef-d768-459c-968a-dd1a973c7330
	I1205 20:00:20.569214  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:20.569219  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:20.569224  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:20.569231  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:20.569236  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:20 GMT
	I1205 20:00:20.569374  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:21.067048  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:21.067071  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:21.067079  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:21.067085  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:21.069348  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:21.069370  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:21.069378  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:21.069383  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:21.069388  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:21.069393  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:21.069399  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:21 GMT
	I1205 20:00:21.069404  100448 round_trippers.go:580]     Audit-Id: cf10d7d7-78d9-4eed-9daf-04d73872ad68
	I1205 20:00:21.069565  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:21.069881  100448 node_ready.go:58] node "multinode-340918" has status "Ready":"False"
	I1205 20:00:21.566160  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:21.566189  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:21.566202  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:21.566213  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:21.568699  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:21.568723  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:21.568733  100448 round_trippers.go:580]     Audit-Id: 971c7e3c-d2be-4d58-9856-8f77da4fdb6d
	I1205 20:00:21.568740  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:21.568748  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:21.568756  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:21.568765  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:21.568778  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:21 GMT
	I1205 20:00:21.568953  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:22.066724  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:22.066750  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:22.066765  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:22.066775  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:22.069132  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:22.069150  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:22.069156  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:22 GMT
	I1205 20:00:22.069161  100448 round_trippers.go:580]     Audit-Id: ccaf7226-9639-4015-aa7c-7689a3a49bfd
	I1205 20:00:22.069167  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:22.069172  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:22.069177  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:22.069182  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:22.069332  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:22.567031  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:22.567059  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:22.567077  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:22.567092  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:22.569610  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:22.569627  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:22.569635  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:22 GMT
	I1205 20:00:22.569641  100448 round_trippers.go:580]     Audit-Id: 500f6b28-f5b7-408b-b846-23e972854f16
	I1205 20:00:22.569646  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:22.569651  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:22.569661  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:22.569667  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:22.569842  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:23.066548  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:23.066582  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:23.066590  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:23.066597  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:23.068926  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:23.068946  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:23.068953  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:23.068959  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:23.068964  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:23.068969  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:23.068974  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:23 GMT
	I1205 20:00:23.068980  100448 round_trippers.go:580]     Audit-Id: 9c2b56ba-ceac-458b-bf0c-d12f246bf614
	I1205 20:00:23.069124  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:23.566804  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:23.566829  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:23.566837  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:23.566843  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:23.569245  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:23.569265  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:23.569274  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:23.569283  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:23 GMT
	I1205 20:00:23.569290  100448 round_trippers.go:580]     Audit-Id: 836e361a-4dbc-4bef-bc6c-4d7e32dedd59
	I1205 20:00:23.569298  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:23.569306  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:23.569319  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:23.569446  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:23.569757  100448 node_ready.go:58] node "multinode-340918" has status "Ready":"False"
	I1205 20:00:24.066058  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:24.066099  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:24.066107  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:24.066113  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:24.068598  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:24.068620  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:24.068630  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:24.068637  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:24.068644  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:24 GMT
	I1205 20:00:24.068651  100448 round_trippers.go:580]     Audit-Id: b1ba77fd-6e36-480a-af8d-eaf90ff591ac
	I1205 20:00:24.068666  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:24.068678  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:24.068802  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:24.566344  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:24.566371  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:24.566379  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:24.566385  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:24.568718  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:24.568739  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:24.568746  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:24 GMT
	I1205 20:00:24.568751  100448 round_trippers.go:580]     Audit-Id: 0ccb54d9-133d-4983-b1af-14a6e5e3dbe2
	I1205 20:00:24.568758  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:24.568767  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:24.568779  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:24.568789  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:24.568924  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:25.066475  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:25.066500  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:25.066508  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:25.066515  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:25.068914  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:25.068940  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:25.068951  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:25.068959  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:25.068967  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:25 GMT
	I1205 20:00:25.068984  100448 round_trippers.go:580]     Audit-Id: eb2838e1-cfe7-4249-b1cf-84cb0f3ab525
	I1205 20:00:25.068991  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:25.068999  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:25.069100  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:25.566763  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:25.566789  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:25.566799  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:25.566805  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:25.569181  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:25.569205  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:25.569212  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:25.569218  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:25.569223  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:25 GMT
	I1205 20:00:25.569229  100448 round_trippers.go:580]     Audit-Id: 9c8c5b06-dca0-4be6-ad8c-a1d610aa3a48
	I1205 20:00:25.569234  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:25.569239  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:25.569437  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:26.066017  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:26.066043  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:26.066051  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:26.066060  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:26.068333  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:26.068441  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:26.068468  100448 round_trippers.go:580]     Audit-Id: ac45b72d-b0f5-4207-bfae-ca680ae1cf1c
	I1205 20:00:26.068479  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:26.068497  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:26.068512  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:26.068526  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:26.068539  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:26 GMT
	I1205 20:00:26.068725  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:26.069106  100448 node_ready.go:58] node "multinode-340918" has status "Ready":"False"
	I1205 20:00:26.566126  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:26.566152  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:26.566166  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:26.566176  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:26.568582  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:26.568609  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:26.568618  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:26.568626  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:26.568633  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:26 GMT
	I1205 20:00:26.568642  100448 round_trippers.go:580]     Audit-Id: 42ae10fe-7fff-4930-be5e-349534ce1f31
	I1205 20:00:26.568650  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:26.568659  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:26.568865  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:27.066486  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:27.066512  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:27.066519  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:27.066526  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:27.068948  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:27.068963  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:27.068971  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:27.068976  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:27.068981  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:27.068988  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:27.069000  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:27 GMT
	I1205 20:00:27.069007  100448 round_trippers.go:580]     Audit-Id: f19c9b18-3cb6-4339-8f26-ec6081cabf6f
	I1205 20:00:27.069116  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:27.566783  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:27.566810  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:27.566818  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:27.566824  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:27.569297  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:27.569323  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:27.569333  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:27.569344  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:27.569353  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:27.569363  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:27 GMT
	I1205 20:00:27.569374  100448 round_trippers.go:580]     Audit-Id: 9ab5797b-9da5-4d85-9f3c-eeb3c5a6e0c9
	I1205 20:00:27.569390  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:27.569530  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:28.066399  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:28.066419  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:28.066426  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:28.066432  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:28.068732  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:28.068755  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:28.068764  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:28.068772  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:28.068780  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:28.068787  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:28 GMT
	I1205 20:00:28.068796  100448 round_trippers.go:580]     Audit-Id: b959a050-be99-4064-bc5c-b793e699b754
	I1205 20:00:28.068803  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:28.068964  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:28.069261  100448 node_ready.go:58] node "multinode-340918" has status "Ready":"False"
	I1205 20:00:28.566645  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:28.566670  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:28.566679  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:28.566685  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:28.569009  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:28.569030  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:28.569037  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:28.569045  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:28.569050  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:28.569055  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:28.569060  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:28 GMT
	I1205 20:00:28.569065  100448 round_trippers.go:580]     Audit-Id: b0c06211-a717-4cc2-9731-1199e65ea0e4
	I1205 20:00:28.569250  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:29.066960  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:29.066985  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:29.066995  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:29.067003  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:29.069425  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:29.069446  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:29.069455  100448 round_trippers.go:580]     Audit-Id: 51ae1c73-8506-4eea-bea0-5f0222adb8dc
	I1205 20:00:29.069462  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:29.069470  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:29.069478  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:29.069486  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:29.069495  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:29 GMT
	I1205 20:00:29.069641  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:29.566250  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:29.566278  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:29.566286  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:29.566293  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:29.568718  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:29.568744  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:29.568754  100448 round_trippers.go:580]     Audit-Id: e83e471f-b08e-42ce-83c4-4554f5f9fb8f
	I1205 20:00:29.568762  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:29.568769  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:29.568777  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:29.568785  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:29.568794  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:29 GMT
	I1205 20:00:29.568943  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:30.066576  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:30.066598  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:30.066606  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:30.066618  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:30.068965  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:30.068993  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:30.069002  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:30.069010  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:30.069017  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:30.069025  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:30 GMT
	I1205 20:00:30.069041  100448 round_trippers.go:580]     Audit-Id: 12dd9caa-d3e0-4a1a-be36-936925c587c2
	I1205 20:00:30.069049  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:30.069165  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:30.069465  100448 node_ready.go:58] node "multinode-340918" has status "Ready":"False"
	I1205 20:00:30.566855  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:30.566885  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:30.566896  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:30.566906  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:30.569276  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:30.569302  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:30.569309  100448 round_trippers.go:580]     Audit-Id: 81fa8475-6238-4a3a-897a-92d266f1512e
	I1205 20:00:30.569315  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:30.569321  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:30.569326  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:30.569331  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:30.569337  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:30 GMT
	I1205 20:00:30.569474  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:31.066116  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:31.066141  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:31.066152  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:31.066160  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:31.068616  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:31.068644  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:31.068652  100448 round_trippers.go:580]     Audit-Id: da81d8e5-2e39-4dc6-8996-ce481cee49d9
	I1205 20:00:31.068664  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:31.068670  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:31.068675  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:31.068681  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:31.068687  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:31 GMT
	I1205 20:00:31.068804  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:31.566277  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:31.566302  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:31.566311  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:31.566317  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:31.568688  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:31.568712  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:31.568722  100448 round_trippers.go:580]     Audit-Id: 5153488a-9e6d-4d8d-92ed-b5a69c8ccdd5
	I1205 20:00:31.568730  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:31.568739  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:31.568747  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:31.568760  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:31.568767  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:31 GMT
	I1205 20:00:31.568942  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:32.066421  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:32.066449  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:32.066459  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:32.066467  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:32.068840  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:32.068867  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:32.068877  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:32.068886  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:32 GMT
	I1205 20:00:32.068893  100448 round_trippers.go:580]     Audit-Id: 45d9b6f8-b0b4-44c2-893d-078f7aec0f72
	I1205 20:00:32.068901  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:32.068910  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:32.068924  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:32.069067  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:32.069488  100448 node_ready.go:58] node "multinode-340918" has status "Ready":"False"
	I1205 20:00:32.566769  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:32.566799  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:32.566810  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:32.566820  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:32.569150  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:32.569175  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:32.569183  100448 round_trippers.go:580]     Audit-Id: 0b68c160-5d97-4307-a806-b80fa9eb4c9a
	I1205 20:00:32.569189  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:32.569194  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:32.569200  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:32.569205  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:32.569210  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:32 GMT
	I1205 20:00:32.569380  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:33.066261  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:33.066287  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:33.066295  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:33.066302  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:33.069831  100448 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:00:33.069860  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:33.069870  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:33.069876  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:33.069883  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:33.069891  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:33 GMT
	I1205 20:00:33.069899  100448 round_trippers.go:580]     Audit-Id: 944ab0fc-a74b-4ce0-896e-bb8a23caadd1
	I1205 20:00:33.069907  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:33.070068  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:33.566778  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:33.566807  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:33.566817  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:33.566826  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:33.569136  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:33.569161  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:33.569169  100448 round_trippers.go:580]     Audit-Id: cf365351-7f2b-45c7-9d52-ddc60818d64f
	I1205 20:00:33.569175  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:33.569180  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:33.569185  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:33.569190  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:33.569195  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:33 GMT
	I1205 20:00:33.569314  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:34.067022  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:34.067046  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:34.067054  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:34.067060  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:34.070259  100448 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:00:34.070330  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:34.070351  100448 round_trippers.go:580]     Audit-Id: cb3f8fca-8b00-44f7-9d19-cadf304dd8d4
	I1205 20:00:34.070369  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:34.070385  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:34.070401  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:34.070425  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:34.070442  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:34 GMT
	I1205 20:00:34.071066  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:34.071620  100448 node_ready.go:58] node "multinode-340918" has status "Ready":"False"
	I1205 20:00:34.566725  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:34.566746  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:34.566754  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:34.566761  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:34.568986  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:34.569005  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:34.569011  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:34.569017  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:34 GMT
	I1205 20:00:34.569024  100448 round_trippers.go:580]     Audit-Id: a1164f96-8725-4d5d-96df-c00e854bdd1d
	I1205 20:00:34.569032  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:34.569043  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:34.569056  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:34.569205  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:35.066936  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:35.066963  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:35.066970  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:35.066976  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:35.069370  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:35.069395  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:35.069405  100448 round_trippers.go:580]     Audit-Id: bd21d249-65b6-4505-b893-b7e535551cad
	I1205 20:00:35.069419  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:35.069431  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:35.069441  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:35.069451  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:35.069464  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:35 GMT
	I1205 20:00:35.069605  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:35.567026  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:35.567049  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:35.567057  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:35.567062  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:35.569530  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:35.569551  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:35.569560  100448 round_trippers.go:580]     Audit-Id: 985636be-010e-4eab-86f5-a261d5a81793
	I1205 20:00:35.569567  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:35.569575  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:35.569582  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:35.569591  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:35.569602  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:35 GMT
	I1205 20:00:35.569765  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:36.066296  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:36.066323  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:36.066334  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:36.066342  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:36.068918  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:36.068943  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:36.068952  100448 round_trippers.go:580]     Audit-Id: fcd6246b-7acf-45d9-9fe8-ea15557ce593
	I1205 20:00:36.068959  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:36.068967  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:36.068975  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:36.068984  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:36.068994  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:36 GMT
	I1205 20:00:36.069162  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:36.566834  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:36.566859  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:36.566867  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:36.566873  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:36.569194  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:36.569213  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:36.569220  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:36.569225  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:36.569230  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:36.569239  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:36 GMT
	I1205 20:00:36.569247  100448 round_trippers.go:580]     Audit-Id: fda03c26-83e6-4933-9575-b9f60f264f41
	I1205 20:00:36.569257  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:36.569407  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:36.569711  100448 node_ready.go:58] node "multinode-340918" has status "Ready":"False"
	I1205 20:00:37.066019  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:37.066045  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:37.066053  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:37.066060  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:37.068376  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:37.068397  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:37.068404  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:37 GMT
	I1205 20:00:37.068411  100448 round_trippers.go:580]     Audit-Id: 99132c88-ff92-4b1e-9b6e-85ed5d8766bc
	I1205 20:00:37.068420  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:37.068428  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:37.068436  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:37.068447  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:37.068606  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:37.566099  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:37.566121  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:37.566135  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:37.566141  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:37.568493  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:37.568518  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:37.568528  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:37 GMT
	I1205 20:00:37.568536  100448 round_trippers.go:580]     Audit-Id: a52e039c-3ad5-486b-9905-5a224190e840
	I1205 20:00:37.568546  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:37.568551  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:37.568558  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:37.568567  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:37.568733  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:38.066793  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:38.066816  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:38.066824  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:38.066830  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:38.069202  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:38.069226  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:38.069234  100448 round_trippers.go:580]     Audit-Id: d31a9c62-790f-47de-818d-c1cf8e8a1315
	I1205 20:00:38.069241  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:38.069249  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:38.069260  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:38.069269  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:38.069276  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:38 GMT
	I1205 20:00:38.069414  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:38.566195  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:38.566222  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:38.566230  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:38.566237  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:38.568564  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:38.568584  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:38.568590  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:38.568595  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:38.568601  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:38.568610  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:38 GMT
	I1205 20:00:38.568618  100448 round_trippers.go:580]     Audit-Id: baa52957-d849-4080-a96f-056c92ed292c
	I1205 20:00:38.568626  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:38.568755  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:39.066311  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:39.066336  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:39.066345  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:39.066351  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:39.068687  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:39.068711  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:39.068720  100448 round_trippers.go:580]     Audit-Id: cf4069cc-488b-4e93-9e78-515b2872810d
	I1205 20:00:39.068728  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:39.068736  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:39.068744  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:39.068757  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:39.068766  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:39 GMT
	I1205 20:00:39.068889  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:39.069178  100448 node_ready.go:58] node "multinode-340918" has status "Ready":"False"
	I1205 20:00:39.566469  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:39.566497  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:39.566517  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:39.566525  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:39.568888  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:39.568915  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:39.568925  100448 round_trippers.go:580]     Audit-Id: 71bb20fd-e47a-4527-91e7-b46acc23d984
	I1205 20:00:39.568932  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:39.568937  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:39.568944  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:39.568949  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:39.568955  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:39 GMT
	I1205 20:00:39.569099  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:40.066711  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:40.066742  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:40.066755  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:40.066763  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:40.069103  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:40.069130  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:40.069154  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:40.069163  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:40.069172  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:40.069181  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:40 GMT
	I1205 20:00:40.069192  100448 round_trippers.go:580]     Audit-Id: 9bc00cb7-6269-46ea-a523-871801795a4f
	I1205 20:00:40.069205  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:40.069361  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:40.566905  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:40.566932  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:40.566940  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:40.566946  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:40.569318  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:40.569340  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:40.569350  100448 round_trippers.go:580]     Audit-Id: 03d35402-05fe-4e64-8f8a-5771d0e0891c
	I1205 20:00:40.569358  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:40.569366  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:40.569373  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:40.569384  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:40.569400  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:40 GMT
	I1205 20:00:40.569523  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:41.066100  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:41.066131  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:41.066142  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:41.066150  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:41.068482  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:41.068550  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:41.068563  100448 round_trippers.go:580]     Audit-Id: a0cb5764-df5d-4f45-b871-ba3b55ebd769
	I1205 20:00:41.068571  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:41.068592  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:41.068601  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:41.068609  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:41.068619  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:41 GMT
	I1205 20:00:41.068761  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:41.566245  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:41.566276  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:41.566284  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:41.566291  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:41.568556  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:41.568582  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:41.568591  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:41.568599  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:41.568607  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:41.568614  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:41.568625  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:41 GMT
	I1205 20:00:41.568634  100448 round_trippers.go:580]     Audit-Id: ccc87fba-2e58-4d76-a43e-dbae14f2f6ff
	I1205 20:00:41.568771  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:41.569102  100448 node_ready.go:58] node "multinode-340918" has status "Ready":"False"
	I1205 20:00:42.066316  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:42.066340  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:42.066348  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:42.066354  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:42.068690  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:42.068714  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:42.068723  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:42.068731  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:42.068739  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:42.068747  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:42.068755  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:42 GMT
	I1205 20:00:42.068764  100448 round_trippers.go:580]     Audit-Id: 500b146d-923d-41fa-b26f-138a940dc27a
	I1205 20:00:42.068916  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:42.566544  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:42.566588  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:42.566596  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:42.566603  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:42.568981  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:42.569008  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:42.569018  100448 round_trippers.go:580]     Audit-Id: 1285cfa1-ff02-4b7a-9b40-52be61f4cd7b
	I1205 20:00:42.569025  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:42.569033  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:42.569041  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:42.569049  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:42.569061  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:42 GMT
	I1205 20:00:42.569224  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:43.066711  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:43.066729  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:43.066737  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:43.066743  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:43.068980  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:43.069015  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:43.069025  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:43.069034  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:43.069042  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:43.069052  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:43 GMT
	I1205 20:00:43.069061  100448 round_trippers.go:580]     Audit-Id: 6546d942-52c6-4b1a-b7bd-a9fc73067f15
	I1205 20:00:43.069072  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:43.069211  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"317","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1205 20:00:43.566824  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:43.566854  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:43.566862  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:43.566868  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:43.569030  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:43.569050  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:43.569057  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:43 GMT
	I1205 20:00:43.569063  100448 round_trippers.go:580]     Audit-Id: f9bb10fc-00ca-42ec-88e0-565d3db5c174
	I1205 20:00:43.569068  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:43.569076  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:43.569083  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:43.569091  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:43.569228  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1205 20:00:43.569655  100448 node_ready.go:49] node "multinode-340918" has status "Ready":"True"
	I1205 20:00:43.569678  100448 node_ready.go:38] duration metric: took 31.513644754s waiting for node "multinode-340918" to be "Ready" ...
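The polling visible above is minikube's node-readiness wait: roughly every 500 ms it GETs /api/v1/nodes/multinode-340918 and inspects the node's Ready condition, logging `has status "Ready":"False"` until the condition flips to True (here at 20:00:43, after 31.5 s). The sketch below reproduces that pattern with client-go; it is illustrative only, not minikube's actual node_ready.go, and the kubeconfig path and timeout are placeholder assumptions, with the node name and cadence taken from this log.

	// Minimal client-go sketch of the node-readiness poll seen above.
	// Assumptions: the kubeconfig path is a placeholder; the 500 ms cadence
	// and node name "multinode-340918" are taken from the surrounding log.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		for {
			// One GET per iteration, like each round_trippers request above.
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node reports Ready=True, as the log shows at 20:00:43
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond): // cadence matching the log
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // assumed budget
		defer cancel()
		if err := waitNodeReady(ctx, cs, "multinode-340918"); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}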
	I1205 20:00:43.569689  100448 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:00:43.569781  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1205 20:00:43.569793  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:43.569804  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:43.569815  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:43.572957  100448 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:00:43.572979  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:43.572986  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:43 GMT
	I1205 20:00:43.572992  100448 round_trippers.go:580]     Audit-Id: a7138647-440c-498c-8b48-57ecd87046aa
	I1205 20:00:43.572997  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:43.573006  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:43.573012  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:43.573017  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:43.573470  100448 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"392"},"items":[{"metadata":{"name":"coredns-5dd5756b68-skz8t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d21b0f8e-2cfc-4fdf-a923-b997fb927fbe","resourceVersion":"392","creationTimestamp":"2023-12-05T20:00:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2978b9fb-1935-4f3d-b677-394358d51e00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:00:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2978b9fb-1935-4f3d-b677-394358d51e00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I1205 20:00:43.576489  100448 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-skz8t" in "kube-system" namespace to be "Ready" ...
	I1205 20:00:43.576559  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skz8t
	I1205 20:00:43.576567  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:43.576574  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:43.576580  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:43.578737  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:43.578761  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:43.578771  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:43.578779  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:43.578786  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:43 GMT
	I1205 20:00:43.578798  100448 round_trippers.go:580]     Audit-Id: da2c1f3f-956a-40be-a64e-41ff4ed84871
	I1205 20:00:43.578815  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:43.578824  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:43.578926  100448 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skz8t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d21b0f8e-2cfc-4fdf-a923-b997fb927fbe","resourceVersion":"392","creationTimestamp":"2023-12-05T20:00:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2978b9fb-1935-4f3d-b677-394358d51e00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:00:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2978b9fb-1935-4f3d-b677-394358d51e00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1205 20:00:43.579318  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:43.579330  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:43.579338  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:43.579344  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:43.581258  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:00:43.581273  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:43.581279  100448 round_trippers.go:580]     Audit-Id: 0151a08b-2f2b-48c7-b357-3258e48c3508
	I1205 20:00:43.581285  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:43.581290  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:43.581295  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:43.581300  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:43.581305  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:43 GMT
	I1205 20:00:43.581463  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1205 20:00:43.581813  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skz8t
	I1205 20:00:43.581826  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:43.581833  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:43.581839  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:43.583618  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:00:43.583636  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:43.583645  100448 round_trippers.go:580]     Audit-Id: e9b7e0f0-770c-456d-8786-cf026f1fa800
	I1205 20:00:43.583652  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:43.583659  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:43.583667  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:43.583685  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:43.583697  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:43 GMT
	I1205 20:00:43.583843  100448 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skz8t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d21b0f8e-2cfc-4fdf-a923-b997fb927fbe","resourceVersion":"392","creationTimestamp":"2023-12-05T20:00:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2978b9fb-1935-4f3d-b677-394358d51e00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:00:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2978b9fb-1935-4f3d-b677-394358d51e00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1205 20:00:43.584293  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:43.584308  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:43.584315  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:43.584321  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:43.586056  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:00:43.586076  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:43.586086  100448 round_trippers.go:580]     Audit-Id: 4dad25ad-a534-42bd-8711-7c6e2c6cc0e0
	I1205 20:00:43.586095  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:43.586105  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:43.586114  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:43.586120  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:43.586134  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:43 GMT
	I1205 20:00:43.586245  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1205 20:00:44.087402  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skz8t
	I1205 20:00:44.087429  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:44.087437  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:44.087444  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:44.089805  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:44.089828  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:44.089834  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:44 GMT
	I1205 20:00:44.089840  100448 round_trippers.go:580]     Audit-Id: c3207a23-0a35-44fb-a749-1c1d965847c6
	I1205 20:00:44.089845  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:44.089850  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:44.089855  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:44.089862  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:44.090030  100448 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skz8t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d21b0f8e-2cfc-4fdf-a923-b997fb927fbe","resourceVersion":"392","creationTimestamp":"2023-12-05T20:00:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2978b9fb-1935-4f3d-b677-394358d51e00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:00:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2978b9fb-1935-4f3d-b677-394358d51e00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1205 20:00:44.090505  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:44.090523  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:44.090533  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:44.090541  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:44.092564  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:44.092581  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:44.092591  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:44.092599  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:44 GMT
	I1205 20:00:44.092606  100448 round_trippers.go:580]     Audit-Id: 23b1ff9b-3372-4566-96cd-f5f3d799d831
	I1205 20:00:44.092613  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:44.092623  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:44.092637  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:44.092807  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1205 20:00:44.587392  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skz8t
	I1205 20:00:44.587419  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:44.587427  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:44.587433  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:44.589913  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:44.589939  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:44.589949  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:44.589959  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:44.589967  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:44.589977  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:44 GMT
	I1205 20:00:44.589988  100448 round_trippers.go:580]     Audit-Id: 341efb83-a56f-4a64-808b-2b53a454ccdd
	I1205 20:00:44.589996  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:44.590110  100448 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skz8t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d21b0f8e-2cfc-4fdf-a923-b997fb927fbe","resourceVersion":"405","creationTimestamp":"2023-12-05T20:00:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2978b9fb-1935-4f3d-b677-394358d51e00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:00:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2978b9fb-1935-4f3d-b677-394358d51e00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1205 20:00:44.590657  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:44.590676  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:44.590687  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:44.590696  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:44.593016  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:44.593035  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:44.593042  100448 round_trippers.go:580]     Audit-Id: 61e98c37-a7f5-4e0d-8efa-aa49e6edb908
	I1205 20:00:44.593050  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:44.593059  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:44.593070  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:44.593083  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:44.593092  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:44 GMT
	I1205 20:00:44.593222  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1205 20:00:44.593553  100448 pod_ready.go:92] pod "coredns-5dd5756b68-skz8t" in "kube-system" namespace has status "Ready":"True"
	I1205 20:00:44.593573  100448 pod_ready.go:81] duration metric: took 1.017063519s waiting for pod "coredns-5dd5756b68-skz8t" in "kube-system" namespace to be "Ready" ...
	I1205 20:00:44.593583  100448 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-340918" in "kube-system" namespace to be "Ready" ...
	I1205 20:00:44.593647  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-340918
	I1205 20:00:44.593657  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:44.593664  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:44.593670  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:44.595552  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:00:44.595586  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:44.595595  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:44.595604  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:44.595611  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:44 GMT
	I1205 20:00:44.595619  100448 round_trippers.go:580]     Audit-Id: 20ddcf25-f7b2-443d-a1f7-0a77e0a408b3
	I1205 20:00:44.595634  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:44.595653  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:44.595727  100448 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-340918","namespace":"kube-system","uid":"60f35cfd-060d-4cda-b6a6-f5ee1936b68d","resourceVersion":"295","creationTimestamp":"2023-12-05T19:59:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"807a07541f985b83a4185dbe9a49fec6","kubernetes.io/config.mirror":"807a07541f985b83a4185dbe9a49fec6","kubernetes.io/config.seen":"2023-12-05T19:59:59.280246964Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1205 20:00:44.596070  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:44.596084  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:44.596094  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:44.596103  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:44.597812  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:00:44.597834  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:44.597844  100448 round_trippers.go:580]     Audit-Id: fcebb071-8f98-490a-9ee5-dbe6e1e0a68d
	I1205 20:00:44.597853  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:44.597860  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:44.597871  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:44.597887  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:44.597896  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:44 GMT
	I1205 20:00:44.598043  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1205 20:00:44.598373  100448 pod_ready.go:92] pod "etcd-multinode-340918" in "kube-system" namespace has status "Ready":"True"
	I1205 20:00:44.598388  100448 pod_ready.go:81] duration metric: took 4.791717ms waiting for pod "etcd-multinode-340918" in "kube-system" namespace to be "Ready" ...
	I1205 20:00:44.598398  100448 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-340918" in "kube-system" namespace to be "Ready" ...
	I1205 20:00:44.598445  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-340918
	I1205 20:00:44.598460  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:44.598467  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:44.598473  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:44.600266  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:00:44.600287  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:44.600298  100448 round_trippers.go:580]     Audit-Id: 0729819e-12a1-45e1-b5eb-458f8d7851c7
	I1205 20:00:44.600308  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:44.600317  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:44.600327  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:44.600341  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:44.600354  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:44 GMT
	I1205 20:00:44.600480  100448 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-340918","namespace":"kube-system","uid":"c5e52362-7444-45f8-8dcf-0ceeb08f7f88","resourceVersion":"293","creationTimestamp":"2023-12-05T19:59:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"88af118c95fa254a23ecebf6b6604eb4","kubernetes.io/config.mirror":"88af118c95fa254a23ecebf6b6604eb4","kubernetes.io/config.seen":"2023-12-05T19:59:59.280238446Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1205 20:00:44.600934  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:44.600955  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:44.600962  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:44.600968  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:44.602486  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:00:44.602501  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:44.602509  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:44.602514  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:44 GMT
	I1205 20:00:44.602520  100448 round_trippers.go:580]     Audit-Id: 18257ab2-717a-476a-b425-281c4708d48a
	I1205 20:00:44.602528  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:44.602536  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:44.602548  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:44.602689  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1205 20:00:44.602991  100448 pod_ready.go:92] pod "kube-apiserver-multinode-340918" in "kube-system" namespace has status "Ready":"True"
	I1205 20:00:44.603010  100448 pod_ready.go:81] duration metric: took 4.604073ms waiting for pod "kube-apiserver-multinode-340918" in "kube-system" namespace to be "Ready" ...
	I1205 20:00:44.603023  100448 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-340918" in "kube-system" namespace to be "Ready" ...
	I1205 20:00:44.603068  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-340918
	I1205 20:00:44.603086  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:44.603093  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:44.603102  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:44.604783  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:00:44.604804  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:44.604812  100448 round_trippers.go:580]     Audit-Id: d22f218e-df20-4aed-8aa3-1b0320579115
	I1205 20:00:44.604818  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:44.604824  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:44.604838  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:44.604847  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:44.604855  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:44 GMT
	I1205 20:00:44.605001  100448 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-340918","namespace":"kube-system","uid":"dc52ff14-6ae5-43bd-b80f-774a5fae4fb3","resourceVersion":"275","creationTimestamp":"2023-12-05T19:59:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fe502b0fa0c4a310ff725f7f4d82494e","kubernetes.io/config.mirror":"fe502b0fa0c4a310ff725f7f4d82494e","kubernetes.io/config.seen":"2023-12-05T19:59:53.273068058Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1205 20:00:44.767705  100448 request.go:629] Waited for 162.312531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:44.767773  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:44.767779  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:44.767786  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:44.767794  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:44.769983  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:44.770006  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:44.770016  100448 round_trippers.go:580]     Audit-Id: 4326aa49-9255-43e6-b63d-d468521cd68e
	I1205 20:00:44.770023  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:44.770031  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:44.770038  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:44.770045  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:44.770054  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:44 GMT
	I1205 20:00:44.770176  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1205 20:00:44.770508  100448 pod_ready.go:92] pod "kube-controller-manager-multinode-340918" in "kube-system" namespace has status "Ready":"True"
	I1205 20:00:44.770522  100448 pod_ready.go:81] duration metric: took 167.489288ms waiting for pod "kube-controller-manager-multinode-340918" in "kube-system" namespace to be "Ready" ...
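The recurring "Waited for ... due to client-side throttling" lines in this section come from client-go's own rate limiter, not from the API server's priority-and-fairness machinery; the poll loop simply fires GETs faster than the client's default budget allows. A minimal sketch of where that budget lives, with illustrative values (assumed here, not read from minikube's code):

    package main

    import (
        "fmt"

        "k8s.io/client-go/rest"
    )

    func main() {
        // Illustrative values only; minikube's own client configuration may differ.
        config := &rest.Config{
            Host:  "https://192.168.58.2:8443",
            QPS:   5,  // client-go's default steady-state rate
            Burst: 10, // client-go's default burst allowance
        }
        fmt.Printf("requests beyond %.0f req/s (burst %d) are queued client-side\n",
            config.QPS, config.Burst)
    }

Raising QPS/Burst would silence these messages, at the cost of more load on the apiserver; for a readiness poll loop the queueing is harmless.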
	I1205 20:00:44.770533  100448 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kzfjz" in "kube-system" namespace to be "Ready" ...
	I1205 20:00:44.966922  100448 request.go:629] Waited for 196.331714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kzfjz
	I1205 20:00:44.967012  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kzfjz
	I1205 20:00:44.967023  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:44.967041  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:44.967055  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:44.969460  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:44.969489  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:44.969500  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:44 GMT
	I1205 20:00:44.969508  100448 round_trippers.go:580]     Audit-Id: 409864ba-f0ab-4117-a309-3b8bdd56d0c0
	I1205 20:00:44.969517  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:44.969526  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:44.969535  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:44.969547  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:44.969772  100448 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kzfjz","generateName":"kube-proxy-","namespace":"kube-system","uid":"78fc1f07-e92e-4a48-a04c-62cc7cea5435","resourceVersion":"373","creationTimestamp":"2023-12-05T20:00:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b4f45069-b28b-42ec-8716-60005d2e7302","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:00:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b4f45069-b28b-42ec-8716-60005d2e7302\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1205 20:00:45.167568  100448 request.go:629] Waited for 197.34731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:45.167642  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:45.167652  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:45.167670  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:45.167680  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:45.169917  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:45.169936  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:45.169943  100448 round_trippers.go:580]     Audit-Id: dda68d2d-e0ed-44ac-ba2a-c534c01be99d
	I1205 20:00:45.169949  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:45.169954  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:45.169959  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:45.169965  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:45.169973  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:45 GMT
	I1205 20:00:45.170080  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1205 20:00:45.170407  100448 pod_ready.go:92] pod "kube-proxy-kzfjz" in "kube-system" namespace has status "Ready":"True"
	I1205 20:00:45.170426  100448 pod_ready.go:81] duration metric: took 399.887056ms waiting for pod "kube-proxy-kzfjz" in "kube-system" namespace to be "Ready" ...
	I1205 20:00:45.170435  100448 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-340918" in "kube-system" namespace to be "Ready" ...
	I1205 20:00:45.367892  100448 request.go:629] Waited for 197.390091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-340918
	I1205 20:00:45.367981  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-340918
	I1205 20:00:45.367995  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:45.368014  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:45.368031  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:45.370283  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:45.370302  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:45.370309  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:45.370314  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:45.370319  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:45.370325  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:45 GMT
	I1205 20:00:45.370332  100448 round_trippers.go:580]     Audit-Id: f9afab78-43dd-4692-8578-e02b7081f98a
	I1205 20:00:45.370340  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:45.370507  100448 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-340918","namespace":"kube-system","uid":"249b098e-76fa-4946-b7e4-82846c7c7220","resourceVersion":"271","creationTimestamp":"2023-12-05T19:59:59Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3ff8214b1941928379220b4b7e0a1487","kubernetes.io/config.mirror":"3ff8214b1941928379220b4b7e0a1487","kubernetes.io/config.seen":"2023-12-05T19:59:59.280245687Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1205 20:00:45.567242  100448 request.go:629] Waited for 196.348308ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:45.567312  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:00:45.567320  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:45.567332  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:45.567346  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:45.569590  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:45.569608  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:45.569615  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:45.569620  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:45 GMT
	I1205 20:00:45.569625  100448 round_trippers.go:580]     Audit-Id: 3c83d630-6651-4dec-adb4-f0509b7120eb
	I1205 20:00:45.569630  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:45.569635  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:45.569641  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:45.569751  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1205 20:00:45.570117  100448 pod_ready.go:92] pod "kube-scheduler-multinode-340918" in "kube-system" namespace has status "Ready":"True"
	I1205 20:00:45.570138  100448 pod_ready.go:81] duration metric: took 399.696029ms waiting for pod "kube-scheduler-multinode-340918" in "kube-system" namespace to be "Ready" ...
	I1205 20:00:45.570151  100448 pod_ready.go:38] duration metric: took 2.000430967s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
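Every wait above follows the same shape: GET the pod, inspect its Ready condition, then GET the node it is scheduled on. A minimal client-go sketch of that readiness probe, assuming a reachable kubeconfig; the pod name is taken from the log, everything else is generic:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True,
    // the same check pod_ready.go is logging above.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-340918", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            select {
            case <-ctx.Done():
                panic("timed out waiting for pod")
            case <-time.After(2 * time.Second):
            }
        }
    }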
	I1205 20:00:45.570175  100448 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:00:45.570232  100448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:00:45.579869  100448 command_runner.go:130] > 1416
	I1205 20:00:45.580586  100448 api_server.go:72] duration metric: took 33.601334259s to wait for apiserver process to appear ...
	I1205 20:00:45.580600  100448 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:00:45.580614  100448 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1205 20:00:45.584571  100448 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1205 20:00:45.584629  100448 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1205 20:00:45.584637  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:45.584645  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:45.584653  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:45.585527  100448 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1205 20:00:45.585538  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:45.585544  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:45.585550  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:45.585555  100448 round_trippers.go:580]     Content-Length: 264
	I1205 20:00:45.585560  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:45 GMT
	I1205 20:00:45.585565  100448 round_trippers.go:580]     Audit-Id: 56cc7378-734e-4627-bcf2-f841c1600a4e
	I1205 20:00:45.585577  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:45.585584  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:45.585599  100448 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1205 20:00:45.585693  100448 api_server.go:141] control plane version: v1.28.4
	I1205 20:00:45.585710  100448 api_server.go:131] duration metric: took 5.105879ms to wait for apiserver health ...
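The health gate is two plain GETs: /healthz must return the literal body "ok", and /version yields the small JSON document shown above. A sketch of the same probe with net/http; skipping TLS verification is an assumption for brevity (minikube verifies against the cluster CA), and default RBAC is what normally lets these two paths answer anonymous requests:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch only: trust nothing about the serving cert. A real
            // client should pin the cluster CA from its kubeconfig instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for _, path := range []string{"/healthz", "/version"} {
            resp, err := client.Get("https://192.168.58.2:8443" + path)
            if err != nil {
                panic(err)
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s -> %s: %s\n", path, resp.Status, body)
        }
    }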
	I1205 20:00:45.585716  100448 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:00:45.767092  100448 request.go:629] Waited for 181.319197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1205 20:00:45.767184  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1205 20:00:45.767195  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:45.767209  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:45.767223  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:45.770358  100448 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:00:45.770385  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:45.770395  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:45.770405  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:45 GMT
	I1205 20:00:45.770414  100448 round_trippers.go:580]     Audit-Id: 2b953a87-2aed-40db-a134-ee6c61ffb3e7
	I1205 20:00:45.770423  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:45.770435  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:45.770444  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:45.771027  100448 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-5dd5756b68-skz8t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d21b0f8e-2cfc-4fdf-a923-b997fb927fbe","resourceVersion":"405","creationTimestamp":"2023-12-05T20:00:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2978b9fb-1935-4f3d-b677-394358d51e00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:00:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2978b9fb-1935-4f3d-b677-394358d51e00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1205 20:00:45.773468  100448 system_pods.go:59] 8 kube-system pods found
	I1205 20:00:45.773494  100448 system_pods.go:61] "coredns-5dd5756b68-skz8t" [d21b0f8e-2cfc-4fdf-a923-b997fb927fbe] Running
	I1205 20:00:45.773501  100448 system_pods.go:61] "etcd-multinode-340918" [60f35cfd-060d-4cda-b6a6-f5ee1936b68d] Running
	I1205 20:00:45.773508  100448 system_pods.go:61] "kindnet-h9575" [5a47313d-f97d-4de3-9298-5aeee7cc15e9] Running
	I1205 20:00:45.773517  100448 system_pods.go:61] "kube-apiserver-multinode-340918" [c5e52362-7444-45f8-8dcf-0ceeb08f7f88] Running
	I1205 20:00:45.773532  100448 system_pods.go:61] "kube-controller-manager-multinode-340918" [dc52ff14-6ae5-43bd-b80f-774a5fae4fb3] Running
	I1205 20:00:45.773539  100448 system_pods.go:61] "kube-proxy-kzfjz" [78fc1f07-e92e-4a48-a04c-62cc7cea5435] Running
	I1205 20:00:45.773549  100448 system_pods.go:61] "kube-scheduler-multinode-340918" [249b098e-76fa-4946-b7e4-82846c7c7220] Running
	I1205 20:00:45.773558  100448 system_pods.go:61] "storage-provisioner" [178fdf74-f6b5-4bfd-8c4e-7511303ab9c2] Running
	I1205 20:00:45.773569  100448 system_pods.go:74] duration metric: took 187.846831ms to wait for pod list to return data ...
	I1205 20:00:45.773578  100448 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:00:45.966922  100448 request.go:629] Waited for 193.272485ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:00:45.967003  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:00:45.967014  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:45.967027  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:45.967039  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:45.969523  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:45.969548  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:45.969556  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:45.969562  100448 round_trippers.go:580]     Content-Length: 261
	I1205 20:00:45.969569  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:45 GMT
	I1205 20:00:45.969577  100448 round_trippers.go:580]     Audit-Id: 6225ab48-2682-4880-844d-c5cf8d5835dc
	I1205 20:00:45.969586  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:45.969598  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:45.969609  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:45.969636  100448 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"56ebf054-7bbe-483e-8a75-4bc9a297c8f3","resourceVersion":"296","creationTimestamp":"2023-12-05T20:00:11Z"}}]}
	I1205 20:00:45.969883  100448 default_sa.go:45] found service account: "default"
	I1205 20:00:45.969905  100448 default_sa.go:55] duration metric: took 196.318816ms for default service account to be created ...
	I1205 20:00:45.969915  100448 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:00:46.167349  100448 request.go:629] Waited for 197.365117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1205 20:00:46.167427  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1205 20:00:46.167439  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:46.167458  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:46.167472  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:46.170648  100448 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:00:46.170675  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:46.170682  100448 round_trippers.go:580]     Audit-Id: 0ab8b17d-dfc1-48da-9eee-e12abf812828
	I1205 20:00:46.170688  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:46.170693  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:46.170698  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:46.170706  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:46.170715  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:46 GMT
	I1205 20:00:46.171256  100448 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"coredns-5dd5756b68-skz8t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d21b0f8e-2cfc-4fdf-a923-b997fb927fbe","resourceVersion":"405","creationTimestamp":"2023-12-05T20:00:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2978b9fb-1935-4f3d-b677-394358d51e00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:00:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2978b9fb-1935-4f3d-b677-394358d51e00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1205 20:00:46.173350  100448 system_pods.go:86] 8 kube-system pods found
	I1205 20:00:46.173369  100448 system_pods.go:89] "coredns-5dd5756b68-skz8t" [d21b0f8e-2cfc-4fdf-a923-b997fb927fbe] Running
	I1205 20:00:46.173374  100448 system_pods.go:89] "etcd-multinode-340918" [60f35cfd-060d-4cda-b6a6-f5ee1936b68d] Running
	I1205 20:00:46.173378  100448 system_pods.go:89] "kindnet-h9575" [5a47313d-f97d-4de3-9298-5aeee7cc15e9] Running
	I1205 20:00:46.173383  100448 system_pods.go:89] "kube-apiserver-multinode-340918" [c5e52362-7444-45f8-8dcf-0ceeb08f7f88] Running
	I1205 20:00:46.173387  100448 system_pods.go:89] "kube-controller-manager-multinode-340918" [dc52ff14-6ae5-43bd-b80f-774a5fae4fb3] Running
	I1205 20:00:46.173391  100448 system_pods.go:89] "kube-proxy-kzfjz" [78fc1f07-e92e-4a48-a04c-62cc7cea5435] Running
	I1205 20:00:46.173395  100448 system_pods.go:89] "kube-scheduler-multinode-340918" [249b098e-76fa-4946-b7e4-82846c7c7220] Running
	I1205 20:00:46.173399  100448 system_pods.go:89] "storage-provisioner" [178fdf74-f6b5-4bfd-8c4e-7511303ab9c2] Running
	I1205 20:00:46.173406  100448 system_pods.go:126] duration metric: took 203.486453ms to wait for k8s-apps to be running ...
	I1205 20:00:46.173414  100448 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:00:46.173456  100448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:00:46.187415  100448 system_svc.go:56] duration metric: took 13.987197ms WaitForService to wait for kubelet.
	I1205 20:00:46.187445  100448 kubeadm.go:581] duration metric: took 34.208193155s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:00:46.187470  100448 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:00:46.366819  100448 request.go:629] Waited for 179.272092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1205 20:00:46.366914  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1205 20:00:46.366926  100448 round_trippers.go:469] Request Headers:
	I1205 20:00:46.366938  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:00:46.366953  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:00:46.369518  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:00:46.369541  100448 round_trippers.go:577] Response Headers:
	I1205 20:00:46.369550  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:00:46.369557  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:00:46 GMT
	I1205 20:00:46.369564  100448 round_trippers.go:580]     Audit-Id: 68e00a65-5ac6-412a-810e-f4652ddce0cc
	I1205 20:00:46.369572  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:00:46.369588  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:00:46.369595  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:00:46.369715  100448 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I1205 20:00:46.370092  100448 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 20:00:46.370110  100448 node_conditions.go:123] node cpu capacity is 8
	I1205 20:00:46.370121  100448 node_conditions.go:105] duration metric: took 182.646059ms to run NodePressure ...
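The capacity figures above are Kubernetes resource quantities: "304681132Ki" is a binary-suffixed byte count and "8" a plain core count. A short sketch of decoding them with the apimachinery resource package:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        storage := resource.MustParse("304681132Ki") // node ephemeral-storage capacity
        cpu := resource.MustParse("8")               // node cpu capacity

        // Value() returns the canonical integer: bytes for storage, cores for cpu.
        fmt.Printf("ephemeral-storage: %d bytes (~%d GiB)\n",
            storage.Value(), storage.Value()/(1<<30))
        fmt.Printf("cpu: %d cores\n", cpu.Value())
    }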
	I1205 20:00:46.370132  100448 start.go:228] waiting for startup goroutines ...
	I1205 20:00:46.370143  100448 start.go:233] waiting for cluster config update ...
	I1205 20:00:46.370162  100448 start.go:242] writing updated cluster config ...
	I1205 20:00:46.372782  100448 out.go:177] 
	I1205 20:00:46.374580  100448 config.go:182] Loaded profile config "multinode-340918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:00:46.374655  100448 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/config.json ...
	I1205 20:00:46.376472  100448 out.go:177] * Starting worker node multinode-340918-m02 in cluster multinode-340918
	I1205 20:00:46.377829  100448 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 20:00:46.379349  100448 out.go:177] * Pulling base image ...
	I1205 20:00:46.381007  100448 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:00:46.381030  100448 cache.go:56] Caching tarball of preloaded images
	I1205 20:00:46.381037  100448 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 20:00:46.381139  100448 preload.go:174] Found /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:00:46.381155  100448 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 20:00:46.381230  100448 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/config.json ...
	I1205 20:00:46.397483  100448 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon, skipping pull
	I1205 20:00:46.397508  100448 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in daemon, skipping load
	I1205 20:00:46.397526  100448 cache.go:194] Successfully downloaded all kic artifacts
	I1205 20:00:46.397561  100448 start.go:365] acquiring machines lock for multinode-340918-m02: {Name:mk98cee47cf84b865cfc85623edf75fdc3a1c2a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:00:46.397697  100448 start.go:369] acquired machines lock for "multinode-340918-m02" in 111.184µs
	I1205 20:00:46.397722  100448 start.go:93] Provisioning new machine with config: &{Name:multinode-340918 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-340918 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1205 20:00:46.397817  100448 start.go:125] createHost starting for "m02" (driver="docker")
	I1205 20:00:46.399959  100448 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1205 20:00:46.400064  100448 start.go:159] libmachine.API.Create for "multinode-340918" (driver="docker")
	I1205 20:00:46.400083  100448 client.go:168] LocalClient.Create starting
	I1205 20:00:46.400163  100448 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem
	I1205 20:00:46.400214  100448 main.go:141] libmachine: Decoding PEM data...
	I1205 20:00:46.400236  100448 main.go:141] libmachine: Parsing certificate...
	I1205 20:00:46.400304  100448 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem
	I1205 20:00:46.400334  100448 main.go:141] libmachine: Decoding PEM data...
	I1205 20:00:46.400348  100448 main.go:141] libmachine: Parsing certificate...
	I1205 20:00:46.400578  100448 cli_runner.go:164] Run: docker network inspect multinode-340918 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 20:00:46.417032  100448 network_create.go:77] Found existing network {name:multinode-340918 subnet:0xc002ff5b00 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1205 20:00:46.417089  100448 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-340918-m02" container
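kic reuses the existing cluster network and hands out host addresses in order: .1 for the gateway, .2 for the control plane, .3 for this m02 worker. A rough sketch of that arithmetic, assuming IPv4 and a prefix of /24 or wider (this mirrors the idea, not minikube's exact helper):

    package main

    import (
        "fmt"
        "net"
    )

    // hostIP returns the nth host address in an IPv4 subnet.
    func hostIP(cidr string, n byte) (net.IP, error) {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return nil, err
        }
        ip := ipnet.IP.To4()
        if ip == nil {
            return nil, fmt.Errorf("not IPv4: %s", cidr)
        }
        out := make(net.IP, len(ip))
        copy(out, ip)
        out[3] += n // assumes the host part fits in the last octet
        return out, nil
    }

    func main() {
        ip, err := hostIP("192.168.58.0/24", 3) // third host: the m02 node
        if err != nil {
            panic(err)
        }
        fmt.Println(ip) // 192.168.58.3
    }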
	I1205 20:00:46.417144  100448 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 20:00:46.432749  100448 cli_runner.go:164] Run: docker volume create multinode-340918-m02 --label name.minikube.sigs.k8s.io=multinode-340918-m02 --label created_by.minikube.sigs.k8s.io=true
	I1205 20:00:46.450024  100448 oci.go:103] Successfully created a docker volume multinode-340918-m02
	I1205 20:00:46.450116  100448 cli_runner.go:164] Run: docker run --rm --name multinode-340918-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-340918-m02 --entrypoint /usr/bin/test -v multinode-340918-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib
	I1205 20:00:46.935137  100448 oci.go:107] Successfully prepared a docker volume multinode-340918-m02
	I1205 20:00:46.935170  100448 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:00:46.935189  100448 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 20:00:46.935240  100448 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-340918-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 20:00:51.996604  100448 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-340918-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir: (5.061318244s)
	I1205 20:00:51.996636  100448 kic.go:203] duration metric: took 5.061445 seconds to extract preloaded images to volume
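The two docker invocations above are minikube's preload trick: create a named volume, warm it with a throwaway sidecar (the --entrypoint /usr/bin/test ... -d /var/lib run), then untar the cached image tarball into it so the node container starts with its images already in place. A condensed sketch of that pattern with os/exec; the volume, image, and tarball names are placeholders, and gzip stands in for the lz4 compression minikube actually uses:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run shells out to docker and aborts on any failure.
    func run(args ...string) {
        out, err := exec.Command("docker", args...).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("docker %v: %v\n%s", args, err, out))
        }
    }

    func main() {
        const vol = "demo-preload"            // stand-in for multinode-340918-m02
        const image = "busybox"               // stand-in for the kicbase image
        const tarball = "/tmp/preload.tar.gz" // stand-in; the real preload is lz4-compressed

        run("volume", "create", vol)
        // Untar straight into the volume; the container exits when done
        // and --rm removes it, leaving only the populated volume behind.
        run("run", "--rm",
            "-v", tarball+":/preloaded.tar.gz:ro",
            "-v", vol+":/extractDir",
            image, "tar", "-xzf", "/preloaded.tar.gz", "-C", "/extractDir")
    }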
	W1205 20:00:51.996775  100448 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 20:00:51.996888  100448 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 20:00:52.046217  100448 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-340918-m02 --name multinode-340918-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-340918-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-340918-m02 --network multinode-340918 --ip 192.168.58.3 --volume multinode-340918-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1205 20:00:52.357193  100448 cli_runner.go:164] Run: docker container inspect multinode-340918-m02 --format={{.State.Running}}
	I1205 20:00:52.374840  100448 cli_runner.go:164] Run: docker container inspect multinode-340918-m02 --format={{.State.Status}}
	I1205 20:00:52.392953  100448 cli_runner.go:164] Run: docker exec multinode-340918-m02 stat /var/lib/dpkg/alternatives/iptables
	I1205 20:00:52.433704  100448 oci.go:144] the created container "multinode-340918-m02" has a running status.
	I1205 20:00:52.433737  100448 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918-m02/id_rsa...
	I1205 20:00:52.537440  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1205 20:00:52.537482  100448 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 20:00:52.558717  100448 cli_runner.go:164] Run: docker container inspect multinode-340918-m02 --format={{.State.Status}}
	I1205 20:00:52.576804  100448 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 20:00:52.576825  100448 kic_runner.go:114] Args: [docker exec --privileged multinode-340918-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 20:00:52.634523  100448 cli_runner.go:164] Run: docker container inspect multinode-340918-m02 --format={{.State.Status}}
	I1205 20:00:52.652968  100448 machine.go:88] provisioning docker machine ...
	I1205 20:00:52.653000  100448 ubuntu.go:169] provisioning hostname "multinode-340918-m02"
	I1205 20:00:52.653060  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918-m02
	I1205 20:00:52.676517  100448 main.go:141] libmachine: Using SSH client type: native
	I1205 20:00:52.676876  100448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1205 20:00:52.676892  100448 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-340918-m02 && echo "multinode-340918-m02" | sudo tee /etc/hostname
	I1205 20:00:52.677485  100448 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38586->127.0.0.1:32852: read: connection reset by peer
	I1205 20:00:55.822682  100448 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-340918-m02
	
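
The new node is only reachable through the host-port mapping docker assigned above (127.0.0.1:32852 -> container port 22). A minimal sketch of checking that mapping by hand, not part of the captured log (container name taken from the log; key path shortened relative to the .minikube directory):

	docker port multinode-340918-m02 22/tcp
	# -> 127.0.0.1:32852
	ssh -i .minikube/machines/multinode-340918-m02/id_rsa -p 32852 docker@127.0.0.1 hostname
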
	I1205 20:00:55.822771  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918-m02
	I1205 20:00:55.839135  100448 main.go:141] libmachine: Using SSH client type: native
	I1205 20:00:55.839497  100448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1205 20:00:55.839524  100448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-340918-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-340918-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-340918-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:00:55.972161  100448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:00:55.972213  100448 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6088/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6088/.minikube}
	I1205 20:00:55.972235  100448 ubuntu.go:177] setting up certificates
	I1205 20:00:55.972249  100448 provision.go:83] configureAuth start
	I1205 20:00:55.972304  100448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-340918-m02
	I1205 20:00:55.988364  100448 provision.go:138] copyHostCerts
	I1205 20:00:55.988408  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem
	I1205 20:00:55.988440  100448 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem, removing ...
	I1205 20:00:55.988455  100448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem
	I1205 20:00:55.988535  100448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem (1679 bytes)
	I1205 20:00:55.988620  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem
	I1205 20:00:55.988645  100448 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem, removing ...
	I1205 20:00:55.988653  100448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem
	I1205 20:00:55.988687  100448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem (1078 bytes)
	I1205 20:00:55.988744  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem
	I1205 20:00:55.988767  100448 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem, removing ...
	I1205 20:00:55.988777  100448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem
	I1205 20:00:55.988810  100448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem (1123 bytes)
	I1205 20:00:55.988873  100448 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem org=jenkins.multinode-340918-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-340918-m02]
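
The server certificate generated above is minted with the SAN list shown (node IP, localhost, hostname). A sketch for confirming what actually landed in server.pem, not from the log (path shortened relative to the .minikube directory):

	openssl x509 -noout -text -in .minikube/machines/server.pem | grep -A1 'Subject Alternative Name'
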
	I1205 20:00:56.067120  100448 provision.go:172] copyRemoteCerts
	I1205 20:00:56.067176  100448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:00:56.067211  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918-m02
	I1205 20:00:56.085055  100448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918-m02/id_rsa Username:docker}
	I1205 20:00:56.180807  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:00:56.180872  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:00:56.203588  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:00:56.203674  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1205 20:00:56.226011  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:00:56.226068  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:00:56.248255  100448 provision.go:86] duration metric: configureAuth took 275.991174ms
	I1205 20:00:56.248284  100448 ubuntu.go:193] setting minikube options for container-runtime
	I1205 20:00:56.248465  100448 config.go:182] Loaded profile config "multinode-340918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:00:56.248557  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918-m02
	I1205 20:00:56.267013  100448 main.go:141] libmachine: Using SSH client type: native
	I1205 20:00:56.273141  100448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1205 20:00:56.273168  100448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:00:56.492704  100448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:00:56.492741  100448 machine.go:91] provisioned docker machine in 3.839750575s
	I1205 20:00:56.492752  100448 client.go:171] LocalClient.Create took 10.092662843s
	I1205 20:00:56.492772  100448 start.go:167] duration metric: libmachine.API.Create for "multinode-340918" took 10.092708965s
	I1205 20:00:56.492784  100448 start.go:300] post-start starting for "multinode-340918-m02" (driver="docker")
	I1205 20:00:56.492792  100448 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:00:56.492851  100448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:00:56.492907  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918-m02
	I1205 20:00:56.509827  100448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918-m02/id_rsa Username:docker}
	I1205 20:00:56.605152  100448 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:00:56.608575  100448 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1205 20:00:56.608595  100448 command_runner.go:130] > NAME="Ubuntu"
	I1205 20:00:56.608600  100448 command_runner.go:130] > VERSION_ID="22.04"
	I1205 20:00:56.608606  100448 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1205 20:00:56.608615  100448 command_runner.go:130] > VERSION_CODENAME=jammy
	I1205 20:00:56.608619  100448 command_runner.go:130] > ID=ubuntu
	I1205 20:00:56.608623  100448 command_runner.go:130] > ID_LIKE=debian
	I1205 20:00:56.608628  100448 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1205 20:00:56.608633  100448 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1205 20:00:56.608640  100448 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1205 20:00:56.608656  100448 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1205 20:00:56.608662  100448 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1205 20:00:56.608709  100448 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 20:00:56.608731  100448 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 20:00:56.608742  100448 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 20:00:56.608748  100448 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1205 20:00:56.608760  100448 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6088/.minikube/addons for local assets ...
	I1205 20:00:56.608810  100448 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6088/.minikube/files for local assets ...
	I1205 20:00:56.608873  100448 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem -> 128832.pem in /etc/ssl/certs
	I1205 20:00:56.608882  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem -> /etc/ssl/certs/128832.pem
	I1205 20:00:56.608966  100448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:00:56.617273  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem --> /etc/ssl/certs/128832.pem (1708 bytes)
	I1205 20:00:56.639520  100448 start.go:303] post-start completed in 146.722496ms
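
The 128832.pem asset synced just above comes from minikube's files/ convention: anything placed under .minikube/files/ is mirrored into the node at the same relative path during post-start. A sketch of staging such an asset (corp-ca.pem is a hypothetical file name):

	mkdir -p .minikube/files/etc/ssl/certs
	cp corp-ca.pem .minikube/files/etc/ssl/certs/   # hypothetical cert; lands at /etc/ssl/certs in the node
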
	I1205 20:00:56.639861  100448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-340918-m02
	I1205 20:00:56.656667  100448 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/config.json ...
	I1205 20:00:56.656977  100448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:00:56.657030  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918-m02
	I1205 20:00:56.674605  100448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918-m02/id_rsa Username:docker}
	I1205 20:00:56.768886  100448 command_runner.go:130] > 24%
	I1205 20:00:56.768946  100448 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 20:00:56.772992  100448 command_runner.go:130] > 222G
	I1205 20:00:56.773160  100448 start.go:128] duration metric: createHost completed in 10.375327322s
	I1205 20:00:56.773184  100448 start.go:83] releasing machines lock for "multinode-340918-m02", held for 10.375474531s
	I1205 20:00:56.773257  100448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-340918-m02
	I1205 20:00:56.792346  100448 out.go:177] * Found network options:
	I1205 20:00:56.793919  100448 out.go:177]   - NO_PROXY=192.168.58.2
	W1205 20:00:56.795567  100448 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 20:00:56.795621  100448 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:00:56.795700  100448 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:00:56.795751  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918-m02
	I1205 20:00:56.795795  100448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:00:56.795858  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918-m02
	I1205 20:00:56.813359  100448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918-m02/id_rsa Username:docker}
	I1205 20:00:56.813575  100448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918-m02/id_rsa Username:docker}
	I1205 20:00:57.038373  100448 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 20:00:57.038396  100448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:00:57.042481  100448 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1205 20:00:57.042511  100448 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1205 20:00:57.042521  100448 command_runner.go:130] > Device: b0h/176d	Inode: 539841      Links: 1
	I1205 20:00:57.042531  100448 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:00:57.042542  100448 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1205 20:00:57.042549  100448 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1205 20:00:57.042560  100448 command_runner.go:130] > Change: 2023-12-05 19:35:17.750849877 +0000
	I1205 20:00:57.042565  100448 command_runner.go:130] >  Birth: 2023-12-05 19:35:17.750849877 +0000
	I1205 20:00:57.042667  100448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:00:57.059779  100448 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 20:00:57.059854  100448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:00:57.086123  100448 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1205 20:00:57.086164  100448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
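
The two find/-exec runs above park the stock loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so they no longer match *.conf/*.conflist and cannot shadow the CNI minikube installs. A standalone sketch of the same rename, runnable inside the node:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
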
	I1205 20:00:57.086173  100448 start.go:475] detecting cgroup driver to use...
	I1205 20:00:57.086216  100448 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1205 20:00:57.086358  100448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:00:57.100271  100448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:00:57.110402  100448 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:00:57.110450  100448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:00:57.123037  100448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:00:57.136206  100448 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:00:57.209451  100448 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:00:57.293153  100448 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1205 20:00:57.293198  100448 docker.go:219] disabling docker service ...
	I1205 20:00:57.293267  100448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:00:57.310186  100448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:00:57.320490  100448 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:00:57.331248  100448 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1205 20:00:57.399182  100448 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:00:57.409702  100448 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1205 20:00:57.478431  100448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:00:57.488616  100448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:00:57.502948  100448 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 20:00:57.502984  100448 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:00:57.503052  100448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:00:57.511578  100448 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:00:57.511642  100448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:00:57.520730  100448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:00:57.529470  100448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
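
Taken together, the three sed edits above pin the pause image, switch the cgroup manager, and re-insert conmon_cgroup directly after it. A sketch of verifying the touched keys in the drop-in (expected output shown as comments, matching the crio config dump later in this log):

	grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
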
	I1205 20:00:57.538345  100448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:00:57.546569  100448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:00:57.553654  100448 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 20:00:57.554255  100448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:00:57.562067  100448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:00:57.636555  100448 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:00:57.733608  100448 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:00:57.733669  100448 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:00:57.737044  100448 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 20:00:57.737064  100448 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 20:00:57.737071  100448 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I1205 20:00:57.737080  100448 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:00:57.737088  100448 command_runner.go:130] > Access: 2023-12-05 20:00:57.717146940 +0000
	I1205 20:00:57.737097  100448 command_runner.go:130] > Modify: 2023-12-05 20:00:57.717146940 +0000
	I1205 20:00:57.737110  100448 command_runner.go:130] > Change: 2023-12-05 20:00:57.717146940 +0000
	I1205 20:00:57.737119  100448 command_runner.go:130] >  Birth: -
	I1205 20:00:57.737138  100448 start.go:543] Will wait 60s for crictl version
	I1205 20:00:57.737183  100448 ssh_runner.go:195] Run: which crictl
	I1205 20:00:57.739956  100448 command_runner.go:130] > /usr/bin/crictl
	I1205 20:00:57.740094  100448 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:00:57.769787  100448 command_runner.go:130] > Version:  0.1.0
	I1205 20:00:57.769807  100448 command_runner.go:130] > RuntimeName:  cri-o
	I1205 20:00:57.769813  100448 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1205 20:00:57.769821  100448 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 20:00:57.771667  100448 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1205 20:00:57.771752  100448 ssh_runner.go:195] Run: crio --version
	I1205 20:00:57.802557  100448 command_runner.go:130] > crio version 1.24.6
	I1205 20:00:57.802580  100448 command_runner.go:130] > Version:          1.24.6
	I1205 20:00:57.802604  100448 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1205 20:00:57.802611  100448 command_runner.go:130] > GitTreeState:     clean
	I1205 20:00:57.802621  100448 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1205 20:00:57.802628  100448 command_runner.go:130] > GoVersion:        go1.18.2
	I1205 20:00:57.802646  100448 command_runner.go:130] > Compiler:         gc
	I1205 20:00:57.802662  100448 command_runner.go:130] > Platform:         linux/amd64
	I1205 20:00:57.802676  100448 command_runner.go:130] > Linkmode:         dynamic
	I1205 20:00:57.802686  100448 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 20:00:57.802692  100448 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:00:57.802697  100448 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:00:57.803837  100448 ssh_runner.go:195] Run: crio --version
	I1205 20:00:57.837294  100448 command_runner.go:130] > crio version 1.24.6
	I1205 20:00:57.837330  100448 command_runner.go:130] > Version:          1.24.6
	I1205 20:00:57.837340  100448 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1205 20:00:57.837346  100448 command_runner.go:130] > GitTreeState:     clean
	I1205 20:00:57.837359  100448 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1205 20:00:57.837366  100448 command_runner.go:130] > GoVersion:        go1.18.2
	I1205 20:00:57.837374  100448 command_runner.go:130] > Compiler:         gc
	I1205 20:00:57.837381  100448 command_runner.go:130] > Platform:         linux/amd64
	I1205 20:00:57.837400  100448 command_runner.go:130] > Linkmode:         dynamic
	I1205 20:00:57.837417  100448 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 20:00:57.837428  100448 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:00:57.837437  100448 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:00:57.839689  100448 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1205 20:00:57.841365  100448 out.go:177]   - env NO_PROXY=192.168.58.2
	I1205 20:00:57.842923  100448 cli_runner.go:164] Run: docker network inspect multinode-340918 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 20:00:57.860313  100448 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1205 20:00:57.863967  100448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:00:57.874586  100448 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918 for IP: 192.168.58.3
	I1205 20:00:57.874617  100448 certs.go:190] acquiring lock for shared ca certs: {Name:mk6fbd7b27250f9a01d87d327232e4afd0539a2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:00:57.874778  100448 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key
	I1205 20:00:57.874825  100448 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key
	I1205 20:00:57.874843  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:00:57.874862  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:00:57.874880  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:00:57.874894  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:00:57.874965  100448 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/12883.pem (1338 bytes)
	W1205 20:00:57.875018  100448 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/12883_empty.pem, impossibly tiny 0 bytes
	I1205 20:00:57.875032  100448 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:00:57.875064  100448 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:00:57.875104  100448 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:00:57.875141  100448 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem (1679 bytes)
	I1205 20:00:57.875335  100448 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem (1708 bytes)
	I1205 20:00:57.875396  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem -> /usr/share/ca-certificates/128832.pem
	I1205 20:00:57.875417  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:00:57.875434  100448 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/12883.pem -> /usr/share/ca-certificates/12883.pem
	I1205 20:00:57.875847  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:00:57.898473  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:00:57.920535  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:00:57.942148  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:00:57.964766  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem --> /usr/share/ca-certificates/128832.pem (1708 bytes)
	I1205 20:00:57.987069  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:00:58.009340  100448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/certs/12883.pem --> /usr/share/ca-certificates/12883.pem (1338 bytes)
	I1205 20:00:58.030879  100448 ssh_runner.go:195] Run: openssl version
	I1205 20:00:58.035798  100448 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1205 20:00:58.035875  100448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:00:58.044283  100448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:00:58.047418  100448 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:00:58.047456  100448 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:00:58.047500  100448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:00:58.053348  100448 command_runner.go:130] > b5213941
	I1205 20:00:58.053561  100448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:00:58.061787  100448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12883.pem && ln -fs /usr/share/ca-certificates/12883.pem /etc/ssl/certs/12883.pem"
	I1205 20:00:58.070339  100448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12883.pem
	I1205 20:00:58.073376  100448 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 19:46 /usr/share/ca-certificates/12883.pem
	I1205 20:00:58.073413  100448 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:46 /usr/share/ca-certificates/12883.pem
	I1205 20:00:58.073455  100448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12883.pem
	I1205 20:00:58.079317  100448 command_runner.go:130] > 51391683
	I1205 20:00:58.079532  100448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12883.pem /etc/ssl/certs/51391683.0"
	I1205 20:00:58.088149  100448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128832.pem && ln -fs /usr/share/ca-certificates/128832.pem /etc/ssl/certs/128832.pem"
	I1205 20:00:58.096756  100448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128832.pem
	I1205 20:00:58.099850  100448 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 19:46 /usr/share/ca-certificates/128832.pem
	I1205 20:00:58.099876  100448 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:46 /usr/share/ca-certificates/128832.pem
	I1205 20:00:58.099923  100448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128832.pem
	I1205 20:00:58.105832  100448 command_runner.go:130] > 3ec20f2e
	I1205 20:00:58.106049  100448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128832.pem /etc/ssl/certs/3ec20f2e.0"
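
The three hex values returned by openssl x509 -hash above (b5213941, 51391683, 3ec20f2e) are OpenSSL subject-name hashes; the <hash>.0 symlinks are what let OpenSSL-based clients find each CA by hashed lookup in /etc/ssl/certs. A sketch of the same pairing for one cert:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
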
	I1205 20:00:58.114748  100448 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:00:58.117878  100448 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 20:00:58.117919  100448 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 20:00:58.117997  100448 ssh_runner.go:195] Run: crio config
	I1205 20:00:58.154610  100448 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 20:00:58.154642  100448 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 20:00:58.154654  100448 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 20:00:58.154665  100448 command_runner.go:130] > #
	I1205 20:00:58.154677  100448 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 20:00:58.154691  100448 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 20:00:58.154721  100448 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 20:00:58.154744  100448 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 20:00:58.154755  100448 command_runner.go:130] > # reload'.
	I1205 20:00:58.154783  100448 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 20:00:58.154796  100448 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 20:00:58.154806  100448 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 20:00:58.154812  100448 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 20:00:58.154820  100448 command_runner.go:130] > [crio]
	I1205 20:00:58.154831  100448 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 20:00:58.154843  100448 command_runner.go:130] > # container images, in this directory.
	I1205 20:00:58.154861  100448 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1205 20:00:58.154876  100448 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 20:00:58.154885  100448 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1205 20:00:58.154896  100448 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 20:00:58.154907  100448 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 20:00:58.154914  100448 command_runner.go:130] > # storage_driver = "vfs"
	I1205 20:00:58.154927  100448 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 20:00:58.154936  100448 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 20:00:58.154959  100448 command_runner.go:130] > # storage_option = [
	I1205 20:00:58.154970  100448 command_runner.go:130] > # ]
	I1205 20:00:58.154980  100448 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 20:00:58.154995  100448 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 20:00:58.155002  100448 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 20:00:58.155017  100448 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 20:00:58.155031  100448 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 20:00:58.155042  100448 command_runner.go:130] > # always happen on a node reboot
	I1205 20:00:58.155053  100448 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 20:00:58.155062  100448 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 20:00:58.155078  100448 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 20:00:58.155122  100448 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 20:00:58.155140  100448 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1205 20:00:58.155152  100448 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 20:00:58.155165  100448 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 20:00:58.155176  100448 command_runner.go:130] > # internal_wipe = true
	I1205 20:00:58.155185  100448 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 20:00:58.155199  100448 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 20:00:58.155231  100448 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 20:00:58.155249  100448 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 20:00:58.155262  100448 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 20:00:58.155277  100448 command_runner.go:130] > [crio.api]
	I1205 20:00:58.155287  100448 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 20:00:58.155296  100448 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 20:00:58.155306  100448 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 20:00:58.155316  100448 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 20:00:58.155326  100448 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 20:00:58.155337  100448 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 20:00:58.155347  100448 command_runner.go:130] > # stream_port = "0"
	I1205 20:00:58.155356  100448 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 20:00:58.155363  100448 command_runner.go:130] > # stream_enable_tls = false
	I1205 20:00:58.155376  100448 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 20:00:58.155387  100448 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 20:00:58.155397  100448 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 20:00:58.155410  100448 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1205 20:00:58.155417  100448 command_runner.go:130] > # minutes.
	I1205 20:00:58.155425  100448 command_runner.go:130] > # stream_tls_cert = ""
	I1205 20:00:58.155440  100448 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 20:00:58.155453  100448 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1205 20:00:58.155464  100448 command_runner.go:130] > # stream_tls_key = ""
	I1205 20:00:58.155473  100448 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 20:00:58.155487  100448 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 20:00:58.155499  100448 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1205 20:00:58.155548  100448 command_runner.go:130] > # stream_tls_ca = ""
	I1205 20:00:58.155562  100448 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 20:00:58.155570  100448 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1205 20:00:58.155585  100448 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 20:00:58.155594  100448 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1205 20:00:58.155628  100448 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 20:00:58.155637  100448 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 20:00:58.155643  100448 command_runner.go:130] > [crio.runtime]
	I1205 20:00:58.155653  100448 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 20:00:58.155669  100448 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 20:00:58.155675  100448 command_runner.go:130] > # "nofile=1024:2048"
	I1205 20:00:58.155690  100448 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 20:00:58.155704  100448 command_runner.go:130] > # default_ulimits = [
	I1205 20:00:58.155710  100448 command_runner.go:130] > # ]
	I1205 20:00:58.155721  100448 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 20:00:58.155731  100448 command_runner.go:130] > # no_pivot = false
	I1205 20:00:58.155740  100448 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 20:00:58.155752  100448 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 20:00:58.155761  100448 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 20:00:58.155770  100448 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 20:00:58.155780  100448 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 20:00:58.155795  100448 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:00:58.155805  100448 command_runner.go:130] > # conmon = ""
	I1205 20:00:58.155813  100448 command_runner.go:130] > # Cgroup setting for conmon
	I1205 20:00:58.155830  100448 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 20:00:58.155841  100448 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 20:00:58.155852  100448 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 20:00:58.155864  100448 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 20:00:58.155877  100448 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:00:58.155890  100448 command_runner.go:130] > # conmon_env = [
	I1205 20:00:58.155899  100448 command_runner.go:130] > # ]
	I1205 20:00:58.155910  100448 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 20:00:58.155921  100448 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 20:00:58.155936  100448 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 20:00:58.155945  100448 command_runner.go:130] > # default_env = [
	I1205 20:00:58.155951  100448 command_runner.go:130] > # ]
	I1205 20:00:58.155966  100448 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 20:00:58.155971  100448 command_runner.go:130] > # selinux = false
	I1205 20:00:58.155980  100448 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 20:00:58.156015  100448 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1205 20:00:58.156034  100448 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1205 20:00:58.156043  100448 command_runner.go:130] > # seccomp_profile = ""
	I1205 20:00:58.156057  100448 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1205 20:00:58.156074  100448 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1205 20:00:58.156092  100448 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1205 20:00:58.156105  100448 command_runner.go:130] > # which might increase security.
	I1205 20:00:58.156116  100448 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1205 20:00:58.156135  100448 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 20:00:58.156149  100448 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 20:00:58.156159  100448 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 20:00:58.156171  100448 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 20:00:58.156179  100448 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:00:58.156184  100448 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 20:00:58.156192  100448 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 20:00:58.156208  100448 command_runner.go:130] > # the cgroup blockio controller.
	I1205 20:00:58.156218  100448 command_runner.go:130] > # blockio_config_file = ""
	I1205 20:00:58.156228  100448 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 20:00:58.156237  100448 command_runner.go:130] > # irqbalance daemon.
	I1205 20:00:58.156245  100448 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 20:00:58.156257  100448 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 20:00:58.156270  100448 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:00:58.156279  100448 command_runner.go:130] > # rdt_config_file = ""
	I1205 20:00:58.156291  100448 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 20:00:58.156301  100448 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 20:00:58.156314  100448 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 20:00:58.156328  100448 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 20:00:58.156342  100448 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 20:00:58.156356  100448 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 20:00:58.156366  100448 command_runner.go:130] > # will be added.
	I1205 20:00:58.156373  100448 command_runner.go:130] > # default_capabilities = [
	I1205 20:00:58.156383  100448 command_runner.go:130] > # 	"CHOWN",
	I1205 20:00:58.156390  100448 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 20:00:58.156399  100448 command_runner.go:130] > # 	"FSETID",
	I1205 20:00:58.156405  100448 command_runner.go:130] > # 	"FOWNER",
	I1205 20:00:58.156415  100448 command_runner.go:130] > # 	"SETGID",
	I1205 20:00:58.156422  100448 command_runner.go:130] > # 	"SETUID",
	I1205 20:00:58.156432  100448 command_runner.go:130] > # 	"SETPCAP",
	I1205 20:00:58.156439  100448 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 20:00:58.156448  100448 command_runner.go:130] > # 	"KILL",
	I1205 20:00:58.156454  100448 command_runner.go:130] > # ]
	I1205 20:00:58.156470  100448 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1205 20:00:58.156484  100448 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1205 20:00:58.156495  100448 command_runner.go:130] > # add_inheritable_capabilities = true
	I1205 20:00:58.156510  100448 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 20:00:58.156560  100448 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:00:58.156570  100448 command_runner.go:130] > # default_sysctls = [
	I1205 20:00:58.156576  100448 command_runner.go:130] > # ]
	I1205 20:00:58.156587  100448 command_runner.go:130] > # List of devices on the host that a
	I1205 20:00:58.156599  100448 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 20:00:58.156609  100448 command_runner.go:130] > # allowed_devices = [
	I1205 20:00:58.156616  100448 command_runner.go:130] > # 	"/dev/fuse",
	I1205 20:00:58.156624  100448 command_runner.go:130] > # ]
	I1205 20:00:58.156633  100448 command_runner.go:130] > # List of additional devices, specified as
	I1205 20:00:58.156680  100448 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 20:00:58.156693  100448 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 20:00:58.156708  100448 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:00:58.156719  100448 command_runner.go:130] > # additional_devices = [
	I1205 20:00:58.156727  100448 command_runner.go:130] > # ]
	I1205 20:00:58.156739  100448 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 20:00:58.156748  100448 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 20:00:58.156758  100448 command_runner.go:130] > # 	"/etc/cdi",
	I1205 20:00:58.156771  100448 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 20:00:58.156780  100448 command_runner.go:130] > # ]
	I1205 20:00:58.156790  100448 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 20:00:58.156802  100448 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 20:00:58.156811  100448 command_runner.go:130] > # Defaults to false.
	I1205 20:00:58.156819  100448 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 20:00:58.156830  100448 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 20:00:58.156841  100448 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 20:00:58.156850  100448 command_runner.go:130] > # hooks_dir = [
	I1205 20:00:58.156857  100448 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 20:00:58.156865  100448 command_runner.go:130] > # ]
	I1205 20:00:58.156874  100448 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 20:00:58.156885  100448 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 20:00:58.156896  100448 command_runner.go:130] > # its default mounts from the following two files:
	I1205 20:00:58.156903  100448 command_runner.go:130] > #
	I1205 20:00:58.156912  100448 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 20:00:58.156924  100448 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 20:00:58.156935  100448 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 20:00:58.156947  100448 command_runner.go:130] > #
	I1205 20:00:58.156959  100448 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 20:00:58.156971  100448 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 20:00:58.156986  100448 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 20:00:58.156996  100448 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 20:00:58.157004  100448 command_runner.go:130] > #
	I1205 20:00:58.157011  100448 command_runner.go:130] > # default_mounts_file = ""
	I1205 20:00:58.157022  100448 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 20:00:58.157038  100448 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 20:00:58.157047  100448 command_runner.go:130] > # pids_limit = 0
	I1205 20:00:58.157056  100448 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1205 20:00:58.157068  100448 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 20:00:58.157080  100448 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 20:00:58.157092  100448 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 20:00:58.157101  100448 command_runner.go:130] > # log_size_max = -1
	I1205 20:00:58.157112  100448 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 20:00:58.157121  100448 command_runner.go:130] > # log_to_journald = false
	I1205 20:00:58.157133  100448 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 20:00:58.157146  100448 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 20:00:58.157157  100448 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 20:00:58.157169  100448 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 20:00:58.157180  100448 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 20:00:58.157189  100448 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 20:00:58.157202  100448 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 20:00:58.157211  100448 command_runner.go:130] > # read_only = false
	I1205 20:00:58.157223  100448 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 20:00:58.157236  100448 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 20:00:58.157247  100448 command_runner.go:130] > # live configuration reload.
	I1205 20:00:58.157256  100448 command_runner.go:130] > # log_level = "info"
	I1205 20:00:58.157268  100448 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 20:00:58.157279  100448 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:00:58.157289  100448 command_runner.go:130] > # log_filter = ""
	I1205 20:00:58.157301  100448 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 20:00:58.157314  100448 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 20:00:58.157324  100448 command_runner.go:130] > # separated by comma.
	I1205 20:00:58.157332  100448 command_runner.go:130] > # uid_mappings = ""
	I1205 20:00:58.157348  100448 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 20:00:58.157362  100448 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 20:00:58.157372  100448 command_runner.go:130] > # separated by comma.
	I1205 20:00:58.157386  100448 command_runner.go:130] > # gid_mappings = ""
	I1205 20:00:58.157399  100448 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 20:00:58.157412  100448 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:00:58.157425  100448 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:00:58.157435  100448 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 20:00:58.157447  100448 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 20:00:58.157457  100448 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:00:58.157469  100448 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:00:58.157478  100448 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 20:00:58.157519  100448 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 20:00:58.157532  100448 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 20:00:58.157540  100448 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 20:00:58.157549  100448 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 20:00:58.157558  100448 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 20:00:58.157568  100448 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 20:00:58.157582  100448 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 20:00:58.157592  100448 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 20:00:58.157601  100448 command_runner.go:130] > # drop_infra_ctr = true
	I1205 20:00:58.157610  100448 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 20:00:58.157622  100448 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 20:00:58.157637  100448 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 20:00:58.157646  100448 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 20:00:58.157662  100448 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 20:00:58.157673  100448 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 20:00:58.157682  100448 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 20:00:58.157695  100448 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 20:00:58.157704  100448 command_runner.go:130] > # pinns_path = ""
	I1205 20:00:58.157717  100448 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 20:00:58.157730  100448 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1205 20:00:58.157743  100448 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1205 20:00:58.157753  100448 command_runner.go:130] > # default_runtime = "runc"
	I1205 20:00:58.157765  100448 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 20:00:58.157780  100448 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I1205 20:00:58.157801  100448 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 20:00:58.157812  100448 command_runner.go:130] > # creation as a file is not desired either.
	I1205 20:00:58.157826  100448 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 20:00:58.157836  100448 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 20:00:58.157843  100448 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 20:00:58.157851  100448 command_runner.go:130] > # ]
	I1205 20:00:58.157861  100448 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 20:00:58.157875  100448 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 20:00:58.157893  100448 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1205 20:00:58.157906  100448 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1205 20:00:58.157916  100448 command_runner.go:130] > #
	I1205 20:00:58.157927  100448 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1205 20:00:58.157937  100448 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1205 20:00:58.157946  100448 command_runner.go:130] > #  runtime_type = "oci"
	I1205 20:00:58.157956  100448 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1205 20:00:58.157966  100448 command_runner.go:130] > #  privileged_without_host_devices = false
	I1205 20:00:58.157975  100448 command_runner.go:130] > #  allowed_annotations = []
	I1205 20:00:58.157983  100448 command_runner.go:130] > # Where:
	I1205 20:00:58.157995  100448 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1205 20:00:58.158007  100448 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1205 20:00:58.158020  100448 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 20:00:58.158031  100448 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 20:00:58.158039  100448 command_runner.go:130] > #   in $PATH.
	I1205 20:00:58.158050  100448 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1205 20:00:58.158060  100448 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 20:00:58.158073  100448 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1205 20:00:58.158082  100448 command_runner.go:130] > #   state.
	I1205 20:00:58.158096  100448 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 20:00:58.158109  100448 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1205 20:00:58.158120  100448 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 20:00:58.158131  100448 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 20:00:58.158142  100448 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 20:00:58.158153  100448 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 20:00:58.158160  100448 command_runner.go:130] > #   The currently recognized values are:
	I1205 20:00:58.158173  100448 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 20:00:58.158186  100448 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 20:00:58.158202  100448 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 20:00:58.158215  100448 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 20:00:58.158229  100448 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 20:00:58.158243  100448 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 20:00:58.158261  100448 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 20:00:58.158275  100448 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1205 20:00:58.158313  100448 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 20:00:58.158320  100448 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 20:00:58.158325  100448 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1205 20:00:58.158332  100448 command_runner.go:130] > runtime_type = "oci"
	I1205 20:00:58.158337  100448 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 20:00:58.158345  100448 command_runner.go:130] > runtime_config_path = ""
	I1205 20:00:58.158349  100448 command_runner.go:130] > monitor_path = ""
	I1205 20:00:58.158355  100448 command_runner.go:130] > monitor_cgroup = ""
	I1205 20:00:58.158361  100448 command_runner.go:130] > monitor_exec_cgroup = ""
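
As a quick sanity check, the runtime path configured in the [crio.runtime.runtimes.runc] table above can be verified on the node; a minimal sketch, assuming a shell inside the minikube container (runc's --version flag is standard):

	test -x /usr/lib/cri-o-runc/sbin/runc && /usr/lib/cri-o-runc/sbin/runc --version
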
	I1205 20:00:58.158423  100448 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1205 20:00:58.158436  100448 command_runner.go:130] > # running containers
	I1205 20:00:58.158443  100448 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1205 20:00:58.158461  100448 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1205 20:00:58.158475  100448 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1205 20:00:58.158485  100448 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1205 20:00:58.158496  100448 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1205 20:00:58.158505  100448 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1205 20:00:58.158520  100448 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1205 20:00:58.158529  100448 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1205 20:00:58.158534  100448 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1205 20:00:58.158542  100448 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1205 20:00:58.158548  100448 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 20:00:58.158556  100448 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 20:00:58.158562  100448 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 20:00:58.158572  100448 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1205 20:00:58.158579  100448 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1205 20:00:58.158587  100448 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 20:00:58.158597  100448 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 20:00:58.158606  100448 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 20:00:58.158612  100448 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 20:00:58.158624  100448 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 20:00:58.158627  100448 command_runner.go:130] > # Example:
	I1205 20:00:58.158632  100448 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 20:00:58.158639  100448 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 20:00:58.158644  100448 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 20:00:58.158651  100448 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 20:00:58.158660  100448 command_runner.go:130] > # cpuset = "0-1"
	I1205 20:00:58.158666  100448 command_runner.go:130] > # cpushares = "0"
	I1205 20:00:58.158669  100448 command_runner.go:130] > # Where:
	I1205 20:00:58.158674  100448 command_runner.go:130] > # The workload name is workload-type.
	I1205 20:00:58.158681  100448 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 20:00:58.158688  100448 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 20:00:58.158694  100448 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 20:00:58.158704  100448 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 20:00:58.158712  100448 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1205 20:00:58.158715  100448 command_runner.go:130] > # 
	I1205 20:00:58.158723  100448 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 20:00:58.158726  100448 command_runner.go:130] > #
	I1205 20:00:58.158734  100448 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 20:00:58.158743  100448 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1205 20:00:58.158749  100448 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1205 20:00:58.158757  100448 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1205 20:00:58.158765  100448 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1205 20:00:58.158771  100448 command_runner.go:130] > [crio.image]
	I1205 20:00:58.158777  100448 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 20:00:58.158784  100448 command_runner.go:130] > # default_transport = "docker://"
	I1205 20:00:58.158790  100448 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 20:00:58.158798  100448 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:00:58.158804  100448 command_runner.go:130] > # global_auth_file = ""
	I1205 20:00:58.158810  100448 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 20:00:58.158817  100448 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:00:58.158821  100448 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1205 20:00:58.158830  100448 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 20:00:58.158838  100448 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:00:58.158845  100448 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:00:58.158850  100448 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 20:00:58.158860  100448 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 20:00:58.158868  100448 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1205 20:00:58.158875  100448 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1205 20:00:58.158882  100448 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 20:00:58.158889  100448 command_runner.go:130] > # pause_command = "/pause"
	I1205 20:00:58.158895  100448 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 20:00:58.158909  100448 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 20:00:58.158917  100448 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 20:00:58.158925  100448 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 20:00:58.158932  100448 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 20:00:58.158936  100448 command_runner.go:130] > # signature_policy = ""
	I1205 20:00:58.158945  100448 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 20:00:58.158954  100448 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 20:00:58.158958  100448 command_runner.go:130] > # changing them here.
	I1205 20:00:58.158964  100448 command_runner.go:130] > # insecure_registries = [
	I1205 20:00:58.158968  100448 command_runner.go:130] > # ]
	I1205 20:00:58.158976  100448 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 20:00:58.158981  100448 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1205 20:00:58.158989  100448 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 20:00:58.158997  100448 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 20:00:58.159001  100448 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 20:00:58.159009  100448 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 20:00:58.159015  100448 command_runner.go:130] > # CNI plugins.
	I1205 20:00:58.159019  100448 command_runner.go:130] > [crio.network]
	I1205 20:00:58.159027  100448 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 20:00:58.159035  100448 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1205 20:00:58.159039  100448 command_runner.go:130] > # cni_default_network = ""
	I1205 20:00:58.159047  100448 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 20:00:58.159054  100448 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 20:00:58.159060  100448 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 20:00:58.159065  100448 command_runner.go:130] > # plugin_dirs = [
	I1205 20:00:58.159069  100448 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 20:00:58.159075  100448 command_runner.go:130] > # ]
	I1205 20:00:58.159081  100448 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1205 20:00:58.159087  100448 command_runner.go:130] > [crio.metrics]
	I1205 20:00:58.159092  100448 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 20:00:58.159101  100448 command_runner.go:130] > # enable_metrics = false
	I1205 20:00:58.159108  100448 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 20:00:58.159113  100448 command_runner.go:130] > # Per default all metrics are enabled.
	I1205 20:00:58.159121  100448 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1205 20:00:58.159127  100448 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 20:00:58.159135  100448 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 20:00:58.159143  100448 command_runner.go:130] > # metrics_collectors = [
	I1205 20:00:58.159147  100448 command_runner.go:130] > # 	"operations",
	I1205 20:00:58.159154  100448 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1205 20:00:58.159159  100448 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1205 20:00:58.159168  100448 command_runner.go:130] > # 	"operations_errors",
	I1205 20:00:58.159175  100448 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1205 20:00:58.159179  100448 command_runner.go:130] > # 	"image_pulls_by_name",
	I1205 20:00:58.159186  100448 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1205 20:00:58.159190  100448 command_runner.go:130] > # 	"image_pulls_failures",
	I1205 20:00:58.159197  100448 command_runner.go:130] > # 	"image_pulls_successes",
	I1205 20:00:58.159201  100448 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 20:00:58.159207  100448 command_runner.go:130] > # 	"image_layer_reuse",
	I1205 20:00:58.159214  100448 command_runner.go:130] > # 	"containers_oom_total",
	I1205 20:00:58.159220  100448 command_runner.go:130] > # 	"containers_oom",
	I1205 20:00:58.159224  100448 command_runner.go:130] > # 	"processes_defunct",
	I1205 20:00:58.159230  100448 command_runner.go:130] > # 	"operations_total",
	I1205 20:00:58.159235  100448 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 20:00:58.159241  100448 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 20:00:58.159246  100448 command_runner.go:130] > # 	"operations_errors_total",
	I1205 20:00:58.159250  100448 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 20:00:58.159257  100448 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 20:00:58.159262  100448 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 20:00:58.159268  100448 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 20:00:58.159273  100448 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 20:00:58.159279  100448 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 20:00:58.159283  100448 command_runner.go:130] > # ]
	I1205 20:00:58.159289  100448 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 20:00:58.159295  100448 command_runner.go:130] > # metrics_port = 9090
	I1205 20:00:58.159300  100448 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 20:00:58.159307  100448 command_runner.go:130] > # metrics_socket = ""
	I1205 20:00:58.159327  100448 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 20:00:58.159340  100448 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 20:00:58.159348  100448 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 20:00:58.159355  100448 command_runner.go:130] > # certificate on any modification event.
	I1205 20:00:58.159360  100448 command_runner.go:130] > # metrics_cert = ""
	I1205 20:00:58.159367  100448 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 20:00:58.159372  100448 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 20:00:58.159380  100448 command_runner.go:130] > # metrics_key = ""
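
If metrics were enabled (enable_metrics = true), the defaults above imply a Prometheus endpoint on port 9090; a sketch of probing it from inside the node, assuming curl is available there:

	curl -s http://127.0.0.1:9090/metrics | grep crio_
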
	I1205 20:00:58.159386  100448 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 20:00:58.159392  100448 command_runner.go:130] > [crio.tracing]
	I1205 20:00:58.159397  100448 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 20:00:58.159404  100448 command_runner.go:130] > # enable_tracing = false
	I1205 20:00:58.159409  100448 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1205 20:00:58.159416  100448 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1205 20:00:58.159421  100448 command_runner.go:130] > # Number of samples to collect per million spans.
	I1205 20:00:58.159428  100448 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 20:00:58.159434  100448 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 20:00:58.159439  100448 command_runner.go:130] > [crio.stats]
	I1205 20:00:58.159447  100448 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 20:00:58.159455  100448 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 20:00:58.159462  100448 command_runner.go:130] > # stats_collection_period = 0
	I1205 20:00:58.159490  100448 command_runner.go:130] ! time="2023-12-05 20:00:58.152473309Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1205 20:00:58.159503  100448 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
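
The configuration dump above can be reproduced directly with CRI-O's own config subcommand; selecting the worker node via minikube ssh -n is an assumption about this profile's layout:

	minikube -p multinode-340918 ssh -n m02 -- sudo crio config | head -n 40
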
	I1205 20:00:58.159570  100448 cni.go:84] Creating CNI manager for ""
	I1205 20:00:58.159578  100448 cni.go:136] 2 nodes found, recommending kindnet
	I1205 20:00:58.159591  100448 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:00:58.159611  100448 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-340918 NodeName:multinode-340918-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:00:58.159735  100448 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-340918-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
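
A generated kubeadm config like the one above can be sanity-checked before use; 'kubeadm config validate' is a real subcommand in this kubeadm generation, and the file path here is hypothetical:

	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /tmp/kubeadm.yaml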
	
	I1205 20:00:58.159795  100448 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-340918-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-340918 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:00:58.159847  100448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:00:58.167321  100448 command_runner.go:130] > kubeadm
	I1205 20:00:58.167343  100448 command_runner.go:130] > kubectl
	I1205 20:00:58.167350  100448 command_runner.go:130] > kubelet
	I1205 20:00:58.167916  100448 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:00:58.167980  100448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1205 20:00:58.175627  100448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:00:58.190964  100448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
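
After the two scp writes above, the merged kubelet unit can be inspected with standard systemd tooling (running it inside the node is assumed):

	systemctl cat kubelet    # shows kubelet.service plus the 10-kubeadm.conf drop-in
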
	I1205 20:00:58.206652  100448 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1205 20:00:58.209721  100448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
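
The one-liner above is an idempotent /etc/hosts update; the same pattern expanded for readability (a sketch, not an additional step):

	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts      # drop any stale entry
	  printf '192.168.58.2\tcontrol-plane.minikube.internal\n'     # append the current one
	} > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts                  # replace in one step
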
	I1205 20:00:58.219442  100448 host.go:66] Checking if "multinode-340918" exists ...
	I1205 20:00:58.219717  100448 config.go:182] Loaded profile config "multinode-340918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:00:58.219698  100448 start.go:304] JoinCluster: &{Name:multinode-340918 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-340918 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:00:58.219767  100448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 20:00:58.219815  100448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918
	I1205 20:00:58.236358  100448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918/id_rsa Username:docker}
	I1205 20:00:58.379280  100448 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token xlg7z2.mrkfa62jmyryvkq5 --discovery-token-ca-cert-hash sha256:f61b399cb6776d724c7cf1a9a4fb9913cb1ff908aabc5bdeeadc4488475094de 
	I1205 20:00:58.383878  100448 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1205 20:00:58.383932  100448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xlg7z2.mrkfa62jmyryvkq5 --discovery-token-ca-cert-hash sha256:f61b399cb6776d724c7cf1a9a4fb9913cb1ff908aabc5bdeeadc4488475094de --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-340918-m02"
	I1205 20:00:58.417024  100448 command_runner.go:130] > [preflight] Running pre-flight checks
	I1205 20:00:58.444538  100448 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1205 20:00:58.444562  100448 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I1205 20:00:58.444570  100448 command_runner.go:130] > OS: Linux
	I1205 20:00:58.444578  100448 command_runner.go:130] > CGROUPS_CPU: enabled
	I1205 20:00:58.444586  100448 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1205 20:00:58.444593  100448 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1205 20:00:58.444600  100448 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1205 20:00:58.444608  100448 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1205 20:00:58.444616  100448 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1205 20:00:58.444631  100448 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1205 20:00:58.444644  100448 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1205 20:00:58.444655  100448 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1205 20:00:58.520915  100448 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1205 20:00:58.520943  100448 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1205 20:00:58.544960  100448 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:00:58.545009  100448 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:00:58.545019  100448 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1205 20:00:58.615931  100448 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1205 20:01:00.631431  100448 command_runner.go:130] > This node has joined the cluster:
	I1205 20:01:00.631460  100448 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1205 20:01:00.631481  100448 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1205 20:01:00.631497  100448 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1205 20:01:00.633950  100448 command_runner.go:130] ! W1205 20:00:58.416515    1117 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1205 20:01:00.633974  100448 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1205 20:01:00.633997  100448 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:01:00.634021  100448 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xlg7z2.mrkfa62jmyryvkq5 --discovery-token-ca-cert-hash sha256:f61b399cb6776d724c7cf1a9a4fb9913cb1ff908aabc5bdeeadc4488475094de --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-340918-m02": (2.250068215s)
	I1205 20:01:00.634043  100448 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 20:01:00.793543  100448 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1205 20:01:00.793632  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=multinode-340918 minikube.k8s.io/updated_at=2023_12_05T20_01_00_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:01:00.862778  100448 command_runner.go:130] > node/multinode-340918-m02 labeled
	I1205 20:01:00.865407  100448 start.go:306] JoinCluster complete in 2.645704691s
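
The join performed above can be reproduced by hand with the same kubeadm/kubectl subcommands the log shows (both are real commands; which kubeconfig is active is assumed):

	kubeadm token create --print-join-command --ttl=0    # on the control plane
	kubectl get nodes -o wide                            # confirm multinode-340918-m02 appears
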
	I1205 20:01:00.865435  100448 cni.go:84] Creating CNI manager for ""
	I1205 20:01:00.865441  100448 cni.go:136] 2 nodes found, recommending kindnet
	I1205 20:01:00.865486  100448 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 20:01:00.868807  100448 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1205 20:01:00.868836  100448 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I1205 20:01:00.868848  100448 command_runner.go:130] > Device: 36h/54d	Inode: 547389      Links: 1
	I1205 20:01:00.868859  100448 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:01:00.868887  100448 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1205 20:01:00.868896  100448 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1205 20:01:00.868901  100448 command_runner.go:130] > Change: 2023-12-05 19:35:18.154877769 +0000
	I1205 20:01:00.868908  100448 command_runner.go:130] >  Birth: 2023-12-05 19:35:18.130876112 +0000
	I1205 20:01:00.868955  100448 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1205 20:01:00.868968  100448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1205 20:01:00.884370  100448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 20:01:01.113868  100448 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1205 20:01:01.113890  100448 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1205 20:01:01.113897  100448 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1205 20:01:01.113901  100448 command_runner.go:130] > daemonset.apps/kindnet configured
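
A standard follow-up check that the kindnet daemonset applied above actually rolled out (kubectl rollout status is a real subcommand; the context name mirrors the profile in the log):

	kubectl --context multinode-340918 -n kube-system rollout status daemonset/kindnet
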
	I1205 20:01:01.114310  100448 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 20:01:01.114543  100448 kapi.go:59] client config for multinode-340918: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:01:01.114842  100448 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1205 20:01:01.114855  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:01.114863  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:01.114869  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:01.116805  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:01:01.116824  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:01.116831  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:01.116837  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:01.116843  100448 round_trippers.go:580]     Content-Length: 291
	I1205 20:01:01.116848  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:01 GMT
	I1205 20:01:01.116853  100448 round_trippers.go:580]     Audit-Id: d894b6f0-ca9e-47dd-88ba-82bcdc6afc3b
	I1205 20:01:01.116858  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:01.116867  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:01.116886  100448 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"49f4db43-89bb-40a9-adb1-a6e95567806b","resourceVersion":"409","creationTimestamp":"2023-12-05T19:59:59Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1205 20:01:01.116977  100448 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-340918" context rescaled to 1 replicas
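
The rescale reported above has a direct CLI equivalent (kubectl scale is a real subcommand; a sketch using the profile's context name):

	kubectl --context multinode-340918 -n kube-system scale deployment coredns --replicas=1
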
	I1205 20:01:01.117011  100448 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1205 20:01:01.120246  100448 out.go:177] * Verifying Kubernetes components...
	I1205 20:01:01.121788  100448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:01:01.133005  100448 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 20:01:01.133257  100448 kapi.go:59] client config for multinode-340918: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/profiles/multinode-340918/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:01:01.133480  100448 node_ready.go:35] waiting up to 6m0s for node "multinode-340918-m02" to be "Ready" ...
	I1205 20:01:01.133542  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918-m02
	I1205 20:01:01.133549  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:01.133563  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:01.133571  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:01.135653  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:01:01.135678  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:01.135686  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:01 GMT
	I1205 20:01:01.135695  100448 round_trippers.go:580]     Audit-Id: 10b17e14-3d32-4307-9781-93ef85671dcd
	I1205 20:01:01.135703  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:01.135712  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:01.135720  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:01.135729  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:01.136282  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918-m02","uid":"5fdb3a80-e991-423d-a8a0-48acb5136963","resourceVersion":"451","creationTimestamp":"2023-12-05T20:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_01_00_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1205 20:01:01.136928  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918-m02
	I1205 20:01:01.136947  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:01.136964  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:01.136973  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:01.139253  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:01:01.139279  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:01.139289  100448 round_trippers.go:580]     Audit-Id: b76ac344-eecc-4292-9824-160f9768a7f1
	I1205 20:01:01.139297  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:01.139306  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:01.139315  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:01.139324  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:01.139336  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:01 GMT
	I1205 20:01:01.139469  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918-m02","uid":"5fdb3a80-e991-423d-a8a0-48acb5136963","resourceVersion":"451","creationTimestamp":"2023-12-05T20:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_01_00_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1205 20:01:01.640489  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918-m02
	I1205 20:01:01.640513  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:01.640524  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:01.640531  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:01.642827  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:01:01.642850  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:01.642858  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:01.642865  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:01.642873  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:01.642881  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:01.642888  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:01 GMT
	I1205 20:01:01.642899  100448 round_trippers.go:580]     Audit-Id: 0a268b4a-9198-438f-bb04-716330f8189b
	I1205 20:01:01.643091  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918-m02","uid":"5fdb3a80-e991-423d-a8a0-48acb5136963","resourceVersion":"451","creationTimestamp":"2023-12-05T20:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_01_00_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1205 20:01:02.140639  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918-m02
	I1205 20:01:02.140663  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:02.140671  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:02.140677  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:02.143020  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:01:02.143043  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:02.143060  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:02.143070  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:02.143079  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:02.143095  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:02 GMT
	I1205 20:01:02.143103  100448 round_trippers.go:580]     Audit-Id: ae51d747-8789-4e44-b9b8-aff56c788da6
	I1205 20:01:02.143115  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:02.143229  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918-m02","uid":"5fdb3a80-e991-423d-a8a0-48acb5136963","resourceVersion":"451","creationTimestamp":"2023-12-05T20:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_01_00_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1205 20:01:02.640830  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918-m02
	I1205 20:01:02.640854  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:02.640862  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:02.640869  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:02.643002  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:01:02.643020  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:02.643027  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:02 GMT
	I1205 20:01:02.643032  100448 round_trippers.go:580]     Audit-Id: da281971-a21e-40b6-aa02-88abd8d16cd6
	I1205 20:01:02.643037  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:02.643045  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:02.643060  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:02.643072  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:02.643236  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918-m02","uid":"5fdb3a80-e991-423d-a8a0-48acb5136963","resourceVersion":"466","creationTimestamp":"2023-12-05T20:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_01_00_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5848 chars]
	I1205 20:01:02.643536  100448 node_ready.go:49] node "multinode-340918-m02" has status "Ready":"True"
	I1205 20:01:02.643551  100448 node_ready.go:38] duration metric: took 1.510056037s waiting for node "multinode-340918-m02" to be "Ready" ...
	I1205 20:01:02.643561  100448 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:01:02.643628  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1205 20:01:02.643638  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:02.643645  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:02.643651  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:02.646611  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:01:02.646639  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:02.646647  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:02.646656  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:02.646667  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:02 GMT
	I1205 20:01:02.646676  100448 round_trippers.go:580]     Audit-Id: fdc8e2cc-c6f0-4d7e-9a51-e1985917e848
	I1205 20:01:02.646685  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:02.646693  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:02.647213  100448 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"466"},"items":[{"metadata":{"name":"coredns-5dd5756b68-skz8t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d21b0f8e-2cfc-4fdf-a923-b997fb927fbe","resourceVersion":"405","creationTimestamp":"2023-12-05T20:00:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2978b9fb-1935-4f3d-b677-394358d51e00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:00:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2978b9fb-1935-4f3d-b677-394358d51e00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1205 20:01:02.649244  100448 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-skz8t" in "kube-system" namespace to be "Ready" ...
	I1205 20:01:02.649310  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skz8t
	I1205 20:01:02.649318  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:02.649325  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:02.649334  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:02.651179  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:01:02.651198  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:02.651208  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:02.651217  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:02 GMT
	I1205 20:01:02.651229  100448 round_trippers.go:580]     Audit-Id: 96d86348-a6ad-4043-a6e2-fbf6ab311ff9
	I1205 20:01:02.651237  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:02.651250  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:02.651259  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:02.651356  100448 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skz8t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d21b0f8e-2cfc-4fdf-a923-b997fb927fbe","resourceVersion":"405","creationTimestamp":"2023-12-05T20:00:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2978b9fb-1935-4f3d-b677-394358d51e00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:00:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2978b9fb-1935-4f3d-b677-394358d51e00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1205 20:01:02.651735  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:01:02.651746  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:02.651753  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:02.651759  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:02.653400  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:01:02.653416  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:02.653422  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:02 GMT
	I1205 20:01:02.653427  100448 round_trippers.go:580]     Audit-Id: d03847e7-5da8-4128-ac0d-74ce7b7b7716
	I1205 20:01:02.653433  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:02.653437  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:02.653445  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:02.653453  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:02.653608  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1205 20:01:02.653879  100448 pod_ready.go:92] pod "coredns-5dd5756b68-skz8t" in "kube-system" namespace has status "Ready":"True"
	I1205 20:01:02.653892  100448 pod_ready.go:81] duration metric: took 4.629093ms waiting for pod "coredns-5dd5756b68-skz8t" in "kube-system" namespace to be "Ready" ...
	I1205 20:01:02.653900  100448 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-340918" in "kube-system" namespace to be "Ready" ...
	I1205 20:01:02.653941  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-340918
	I1205 20:01:02.653948  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:02.653955  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:02.653960  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:02.655657  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:01:02.655671  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:02.655677  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:02 GMT
	I1205 20:01:02.655683  100448 round_trippers.go:580]     Audit-Id: 2b936b18-18cb-4051-ab7b-4e653e48459e
	I1205 20:01:02.655688  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:02.655692  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:02.655699  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:02.655711  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:02.655848  100448 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-340918","namespace":"kube-system","uid":"60f35cfd-060d-4cda-b6a6-f5ee1936b68d","resourceVersion":"295","creationTimestamp":"2023-12-05T19:59:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"807a07541f985b83a4185dbe9a49fec6","kubernetes.io/config.mirror":"807a07541f985b83a4185dbe9a49fec6","kubernetes.io/config.seen":"2023-12-05T19:59:59.280246964Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1205 20:01:02.656183  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:01:02.656210  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:02.656221  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:02.656231  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:02.657688  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:01:02.657705  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:02.657714  100448 round_trippers.go:580]     Audit-Id: f021a3ea-2c7e-4291-8fc0-ec95b9e7c091
	I1205 20:01:02.657721  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:02.657729  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:02.657738  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:02.657748  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:02.657760  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:02 GMT
	I1205 20:01:02.657898  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1205 20:01:02.658183  100448 pod_ready.go:92] pod "etcd-multinode-340918" in "kube-system" namespace has status "Ready":"True"
	I1205 20:01:02.658196  100448 pod_ready.go:81] duration metric: took 4.291972ms waiting for pod "etcd-multinode-340918" in "kube-system" namespace to be "Ready" ...
	I1205 20:01:02.658211  100448 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-340918" in "kube-system" namespace to be "Ready" ...
	I1205 20:01:02.658253  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-340918
	I1205 20:01:02.658261  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:02.658267  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:02.658273  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:02.659890  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:01:02.659904  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:02.659914  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:02.659922  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:02.659937  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:02.659945  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:02 GMT
	I1205 20:01:02.659953  100448 round_trippers.go:580]     Audit-Id: 47502200-25c1-4e91-9b9c-6637b2f8917c
	I1205 20:01:02.659965  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:02.660087  100448 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-340918","namespace":"kube-system","uid":"c5e52362-7444-45f8-8dcf-0ceeb08f7f88","resourceVersion":"293","creationTimestamp":"2023-12-05T19:59:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"88af118c95fa254a23ecebf6b6604eb4","kubernetes.io/config.mirror":"88af118c95fa254a23ecebf6b6604eb4","kubernetes.io/config.seen":"2023-12-05T19:59:59.280238446Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1205 20:01:02.660480  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:01:02.660494  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:02.660504  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:02.660512  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:02.662033  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:01:02.662047  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:02.662057  100448 round_trippers.go:580]     Audit-Id: 2abf2ba8-ef72-4ba7-bc60-5458a63af7e3
	I1205 20:01:02.662064  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:02.662072  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:02.662080  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:02.662088  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:02.662098  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:02 GMT
	I1205 20:01:02.662216  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1205 20:01:02.662526  100448 pod_ready.go:92] pod "kube-apiserver-multinode-340918" in "kube-system" namespace has status "Ready":"True"
	I1205 20:01:02.662540  100448 pod_ready.go:81] duration metric: took 4.321215ms waiting for pod "kube-apiserver-multinode-340918" in "kube-system" namespace to be "Ready" ...
	I1205 20:01:02.662548  100448 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-340918" in "kube-system" namespace to be "Ready" ...
	I1205 20:01:02.662609  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-340918
	I1205 20:01:02.662618  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:02.662624  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:02.662631  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:02.664139  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:01:02.664155  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:02.664164  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:02.664172  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:02.664181  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:02.664190  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:02.664217  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:02 GMT
	I1205 20:01:02.664232  100448 round_trippers.go:580]     Audit-Id: fdf004d7-f730-45cb-ab99-6b7419eef965
	I1205 20:01:02.664341  100448 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-340918","namespace":"kube-system","uid":"dc52ff14-6ae5-43bd-b80f-774a5fae4fb3","resourceVersion":"275","creationTimestamp":"2023-12-05T19:59:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fe502b0fa0c4a310ff725f7f4d82494e","kubernetes.io/config.mirror":"fe502b0fa0c4a310ff725f7f4d82494e","kubernetes.io/config.seen":"2023-12-05T19:59:53.273068058Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1205 20:01:02.664695  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:01:02.664708  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:02.664718  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:02.664726  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:02.666184  100448 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:01:02.666198  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:02.666204  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:02 GMT
	I1205 20:01:02.666209  100448 round_trippers.go:580]     Audit-Id: 9786bc11-6902-4710-950e-799de9e60082
	I1205 20:01:02.666215  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:02.666220  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:02.666225  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:02.666230  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:02.666331  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1205 20:01:02.666594  100448 pod_ready.go:92] pod "kube-controller-manager-multinode-340918" in "kube-system" namespace has status "Ready":"True"
	I1205 20:01:02.666609  100448 pod_ready.go:81] duration metric: took 4.035374ms waiting for pod "kube-controller-manager-multinode-340918" in "kube-system" namespace to be "Ready" ...
	I1205 20:01:02.666621  100448 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kgt9d" in "kube-system" namespace to be "Ready" ...
	I1205 20:01:02.840957  100448 request.go:629] Waited for 174.280895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kgt9d
	I1205 20:01:02.841028  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kgt9d
	I1205 20:01:02.841039  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:02.841047  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:02.841054  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:02.843414  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:01:02.843472  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:02.843497  100448 round_trippers.go:580]     Audit-Id: c2bf946b-7fa2-4cb3-8e85-f85fac513e84
	I1205 20:01:02.843511  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:02.843520  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:02.843534  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:02.843544  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:02.843558  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:02 GMT
	I1205 20:01:02.843701  100448 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kgt9d","generateName":"kube-proxy-","namespace":"kube-system","uid":"093e634d-1302-423d-82ce-ee5d6a6fb9d9","resourceVersion":"461","creationTimestamp":"2023-12-05T20:01:00Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b4f45069-b28b-42ec-8716-60005d2e7302","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b4f45069-b28b-42ec-8716-60005d2e7302\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1205 20:01:03.041025  100448 request.go:629] Waited for 196.909362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-340918-m02
	I1205 20:01:03.041095  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918-m02
	I1205 20:01:03.041102  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:03.041113  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:03.041123  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:03.043377  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:01:03.043403  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:03.043419  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:03.043429  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:03.043440  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:03.043449  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:03 GMT
	I1205 20:01:03.043461  100448 round_trippers.go:580]     Audit-Id: ef7c1480-84e9-4b88-b687-3833639b283f
	I1205 20:01:03.043469  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:03.043610  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918-m02","uid":"5fdb3a80-e991-423d-a8a0-48acb5136963","resourceVersion":"466","creationTimestamp":"2023-12-05T20:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_01_00_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5848 chars]
	I1205 20:01:03.044000  100448 pod_ready.go:92] pod "kube-proxy-kgt9d" in "kube-system" namespace has status "Ready":"True"
	I1205 20:01:03.044015  100448 pod_ready.go:81] duration metric: took 377.387451ms waiting for pod "kube-proxy-kgt9d" in "kube-system" namespace to be "Ready" ...
	I1205 20:01:03.044024  100448 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kzfjz" in "kube-system" namespace to be "Ready" ...
	I1205 20:01:03.241357  100448 request.go:629] Waited for 197.271206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kzfjz
	I1205 20:01:03.241442  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kzfjz
	I1205 20:01:03.241454  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:03.241466  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:03.241480  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:03.243751  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:01:03.243772  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:03.243778  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:03.243784  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:03.243795  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:03 GMT
	I1205 20:01:03.243803  100448 round_trippers.go:580]     Audit-Id: 236e5b81-c6c2-49a0-b883-bff94c2c462e
	I1205 20:01:03.243813  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:03.243821  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:03.243966  100448 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kzfjz","generateName":"kube-proxy-","namespace":"kube-system","uid":"78fc1f07-e92e-4a48-a04c-62cc7cea5435","resourceVersion":"373","creationTimestamp":"2023-12-05T20:00:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b4f45069-b28b-42ec-8716-60005d2e7302","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:00:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b4f45069-b28b-42ec-8716-60005d2e7302\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1205 20:01:03.441732  100448 request.go:629] Waited for 197.341726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:01:03.441798  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:01:03.441803  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:03.441811  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:03.441820  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:03.444226  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:01:03.444246  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:03.444252  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:03.444258  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:03 GMT
	I1205 20:01:03.444263  100448 round_trippers.go:580]     Audit-Id: ddd825e5-30be-44b9-89b2-6990d5bdf483
	I1205 20:01:03.444268  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:03.444273  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:03.444279  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:03.444379  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1205 20:01:03.444696  100448 pod_ready.go:92] pod "kube-proxy-kzfjz" in "kube-system" namespace has status "Ready":"True"
	I1205 20:01:03.444714  100448 pod_ready.go:81] duration metric: took 400.682863ms waiting for pod "kube-proxy-kzfjz" in "kube-system" namespace to be "Ready" ...
	I1205 20:01:03.444725  100448 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-340918" in "kube-system" namespace to be "Ready" ...
	I1205 20:01:03.641066  100448 request.go:629] Waited for 196.269585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-340918
	I1205 20:01:03.641136  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-340918
	I1205 20:01:03.641144  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:03.641157  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:03.641170  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:03.643518  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:01:03.643543  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:03.643553  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:03.643559  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:03.643564  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:03.643572  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:03.643578  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:03 GMT
	I1205 20:01:03.643583  100448 round_trippers.go:580]     Audit-Id: 9c4a4a95-fea6-43be-9230-460a3523ac38
	I1205 20:01:03.643772  100448 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-340918","namespace":"kube-system","uid":"249b098e-76fa-4946-b7e4-82846c7c7220","resourceVersion":"271","creationTimestamp":"2023-12-05T19:59:59Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3ff8214b1941928379220b4b7e0a1487","kubernetes.io/config.mirror":"3ff8214b1941928379220b4b7e0a1487","kubernetes.io/config.seen":"2023-12-05T19:59:59.280245687Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T19:59:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1205 20:01:03.841346  100448 request.go:629] Waited for 197.210394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:01:03.841419  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-340918
	I1205 20:01:03.841425  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:03.841433  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:03.841445  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:03.843708  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:01:03.843734  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:03.843743  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:03.843751  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:03.843764  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:03.843771  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:03.843779  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:03 GMT
	I1205 20:01:03.843789  100448 round_trippers.go:580]     Audit-Id: 801a4d40-c478-4d6b-a2a1-8a3ad10f0b65
	I1205 20:01:03.843910  100448 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T19:59:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1205 20:01:03.844235  100448 pod_ready.go:92] pod "kube-scheduler-multinode-340918" in "kube-system" namespace has status "Ready":"True"
	I1205 20:01:03.844252  100448 pod_ready.go:81] duration metric: took 399.519653ms waiting for pod "kube-scheduler-multinode-340918" in "kube-system" namespace to be "Ready" ...
	I1205 20:01:03.844263  100448 pod_ready.go:38] duration metric: took 1.200690328s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:01:03.844277  100448 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:01:03.844321  100448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:01:03.855228  100448 system_svc.go:56] duration metric: took 10.942135ms WaitForService to wait for kubelet.
	I1205 20:01:03.855256  100448 kubeadm.go:581] duration metric: took 2.738218702s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:01:03.855274  100448 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:01:04.041710  100448 request.go:629] Waited for 186.346378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1205 20:01:04.041780  100448 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1205 20:01:04.041788  100448 round_trippers.go:469] Request Headers:
	I1205 20:01:04.041796  100448 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:01:04.041805  100448 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:01:04.044411  100448 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:01:04.044436  100448 round_trippers.go:577] Response Headers:
	I1205 20:01:04.044446  100448 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d19988b5-54ac-4595-bdab-2a9519868618
	I1205 20:01:04.044470  100448 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:01:04 GMT
	I1205 20:01:04.044478  100448 round_trippers.go:580]     Audit-Id: 74406a42-97d4-433c-a9de-e86069a2c421
	I1205 20:01:04.044486  100448 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:01:04.044497  100448 round_trippers.go:580]     Content-Type: application/json
	I1205 20:01:04.044506  100448 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8843ab74-db1f-4518-a67b-a5f8d4da567f
	I1205 20:01:04.044711  100448 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"466"},"items":[{"metadata":{"name":"multinode-340918","uid":"7bb71a42-a31f-4e1c-8c16-2c33d2219e69","resourceVersion":"386","creationTimestamp":"2023-12-05T19:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340918","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-340918","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_00_00_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12840 chars]
	I1205 20:01:04.045380  100448 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 20:01:04.045401  100448 node_conditions.go:123] node cpu capacity is 8
	I1205 20:01:04.045414  100448 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 20:01:04.045420  100448 node_conditions.go:123] node cpu capacity is 8
	I1205 20:01:04.045429  100448 node_conditions.go:105] duration metric: took 190.150313ms to run NodePressure ...
	I1205 20:01:04.045441  100448 start.go:228] waiting for startup goroutines ...
	I1205 20:01:04.045476  100448 start.go:242] writing updated cluster config ...
	I1205 20:01:04.045818  100448 ssh_runner.go:195] Run: rm -f paused
	I1205 20:01:04.090152  100448 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 20:01:04.092416  100448 out.go:177] * Done! kubectl is now configured to use "multinode-340918" cluster and "default" namespace by default
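
The trace above is minikube's readiness wait: node_ready.go polls the node object roughly every 500ms until "Ready":"True", then pod_ready.go polls each system-critical pod, with client-go's own rate limiter occasionally inserting the "Waited ... due to client-side throttling, not priority and fairness" pauses. As an illustration only (this code is not part of the test output), a minimal client-go sketch of the same node wait; the kubeconfig path is a placeholder, and the interval is read off the request timestamps above:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; minikube resolves this from its profile.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Poll every 500ms until the node reports Ready=True or the budget runs out.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := client.CoreV1().Nodes().Get(ctx, "multinode-340918-m02", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient GET failures: keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(`node "multinode-340918-m02" is Ready`)
    }

Transient errors are swallowed so the poll continues, mirroring how the log keeps issuing GETs until the Ready condition flips.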
	
	* 
	* ==> CRI-O <==
	* Dec 05 20:00:43 multinode-340918 crio[962]: time="2023-12-05 20:00:43.849534115Z" level=info msg="Created container 1ac4f92beba93d79af0014fd0b7db9721e867a660f7224175e21f22b3a328005: kube-system/coredns-5dd5756b68-skz8t/coredns" id=c3bce792-f0be-4b8b-bc3b-dcde90d10881 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 20:00:43 multinode-340918 crio[962]: time="2023-12-05 20:00:43.849543164Z" level=info msg="Starting container: c33458a0af46188da86244a111f8eb36ed0b07185d6794b329459feeeffaafe9" id=b3d9e9f7-a8e0-4dcb-bb94-31aa7a0b2c9a name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 20:00:43 multinode-340918 crio[962]: time="2023-12-05 20:00:43.850024621Z" level=info msg="Starting container: 1ac4f92beba93d79af0014fd0b7db9721e867a660f7224175e21f22b3a328005" id=3761a4df-060e-4099-b18a-83fd51a200dd name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 20:00:43 multinode-340918 crio[962]: time="2023-12-05 20:00:43.858843986Z" level=info msg="Started container" PID=2341 containerID=1ac4f92beba93d79af0014fd0b7db9721e867a660f7224175e21f22b3a328005 description=kube-system/coredns-5dd5756b68-skz8t/coredns id=3761a4df-060e-4099-b18a-83fd51a200dd name=/runtime.v1.RuntimeService/StartContainer sandboxID=9c182f578bc3673483df70aedf12f66e292b0c992946f3215101cac65da2e581
	Dec 05 20:00:43 multinode-340918 crio[962]: time="2023-12-05 20:00:43.860795769Z" level=info msg="Started container" PID=2334 containerID=c33458a0af46188da86244a111f8eb36ed0b07185d6794b329459feeeffaafe9 description=kube-system/storage-provisioner/storage-provisioner id=b3d9e9f7-a8e0-4dcb-bb94-31aa7a0b2c9a name=/runtime.v1.RuntimeService/StartContainer sandboxID=50ebba034d580c35b70b5a4763a8aa9a48757fdca860f7dd2f3bb68687bff192
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.125401784Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-fcrbt/POD" id=bafc326a-d1cf-4d4f-8168-0efd36e79808 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.125463857Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.139229142Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-fcrbt Namespace:default ID:8815aa79d41d77ff416f0cc200d938d569ac3d8f224b6a40bcadfdc43f88c104 UID:8d1dd5f3-370b-4812-a70a-2896b420ac8e NetNS:/var/run/netns/f22c985e-157a-4ba8-b8d8-f29dae57e188 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.139275245Z" level=info msg="Adding pod default_busybox-5bc68d56bd-fcrbt to CNI network \"kindnet\" (type=ptp)"
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.148432713Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-fcrbt Namespace:default ID:8815aa79d41d77ff416f0cc200d938d569ac3d8f224b6a40bcadfdc43f88c104 UID:8d1dd5f3-370b-4812-a70a-2896b420ac8e NetNS:/var/run/netns/f22c985e-157a-4ba8-b8d8-f29dae57e188 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.148570840Z" level=info msg="Checking pod default_busybox-5bc68d56bd-fcrbt for CNI network kindnet (type=ptp)"
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.179456445Z" level=info msg="Ran pod sandbox 8815aa79d41d77ff416f0cc200d938d569ac3d8f224b6a40bcadfdc43f88c104 with infra container: default/busybox-5bc68d56bd-fcrbt/POD" id=bafc326a-d1cf-4d4f-8168-0efd36e79808 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.180558013Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=0928b826-7259-4558-9504-1dc3dc4258c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.180830580Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=0928b826-7259-4558-9504-1dc3dc4258c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.181580429Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=4f6de130-34a6-4c18-9941-df7a5b1871d0 name=/runtime.v1.ImageService/PullImage
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.187203330Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.467757944Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.992460800Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=4f6de130-34a6-4c18-9941-df7a5b1871d0 name=/runtime.v1.ImageService/PullImage
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.993543226Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=2ed3ace4-e616-4332-9da1-303e46651757 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.994162807Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2ed3ace4-e616-4332-9da1-303e46651757 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.994946268Z" level=info msg="Creating container: default/busybox-5bc68d56bd-fcrbt/busybox" id=7daa2426-05dc-4250-9060-e56de783ef75 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 20:01:05 multinode-340918 crio[962]: time="2023-12-05 20:01:05.995079723Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 20:01:06 multinode-340918 crio[962]: time="2023-12-05 20:01:06.066791425Z" level=info msg="Created container 05c9bfacb647d0a572161c10174b60c42efd9e1aa85bc6549918922d934b0b70: default/busybox-5bc68d56bd-fcrbt/busybox" id=7daa2426-05dc-4250-9060-e56de783ef75 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 20:01:06 multinode-340918 crio[962]: time="2023-12-05 20:01:06.067464084Z" level=info msg="Starting container: 05c9bfacb647d0a572161c10174b60c42efd9e1aa85bc6549918922d934b0b70" id=3cffdae2-a22d-48b9-84a5-33cd8f551cad name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 20:01:06 multinode-340918 crio[962]: time="2023-12-05 20:01:06.077322517Z" level=info msg="Started container" PID=2520 containerID=05c9bfacb647d0a572161c10174b60c42efd9e1aa85bc6549918922d934b0b70 description=default/busybox-5bc68d56bd-fcrbt/busybox id=3cffdae2-a22d-48b9-84a5-33cd8f551cad name=/runtime.v1.RuntimeService/StartContainer sandboxID=8815aa79d41d77ff416f0cc200d938d569ac3d8f224b6a40bcadfdc43f88c104
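
The busybox-5bc68d56bd-fcrbt entries above trace the standard CRI call sequence end to end: RunPodSandbox, an ImageStatus probe that misses ("Image ... not found"), PullImage, a second ImageStatus that hits, then CreateContainer and StartContainer. For illustration (not part of the test output), the status-then-pull half of that sequence can be driven against the same crio.sock with the cri-api client; a rough sketch with error handling kept minimal:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    	defer cancel()

    	// CRI-O listens on the socket advertised in the node annotations above.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	images := runtimeapi.NewImageServiceClient(conn)
    	spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"}

    	// ImageStatus first; a nil Image in the response is the "not found" case
    	// CRI-O logs, after which the kubelet issues PullImage.
    	status, err := images.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: spec})
    	if err != nil {
    		panic(err)
    	}
    	if status.Image == nil {
    		pulled, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{Image: spec})
    		if err != nil {
    			panic(err)
    		}
    		fmt.Println("pulled:", pulled.ImageRef)
    	} else {
    		fmt.Println("already present:", status.Image.Id)
    	}
    }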
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	05c9bfacb647d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   8815aa79d41d7       busybox-5bc68d56bd-fcrbt
	1ac4f92beba93       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      26 seconds ago       Running             coredns                   0                   9c182f578bc36       coredns-5dd5756b68-skz8t
	c33458a0af461       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      26 seconds ago       Running             storage-provisioner       0                   50ebba034d580       storage-provisioner
	fa7bfaebf13c6       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      58 seconds ago       Running             kindnet-cni               0                   a833599dced66       kindnet-h9575
	40031b9dc7b3c       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      58 seconds ago       Running             kube-proxy                0                   1ecf147d248ee       kube-proxy-kzfjz
	116bcef6cf319       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   280df3e50ca33       kube-scheduler-multinode-340918
	df4f38330e356       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   f6037646f1f72       etcd-multinode-340918
	230bd31d887ca       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   8adb6a41e4f78       kube-apiserver-multinode-340918
	acc2defbadf58       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   e23e43b8c704e       kube-controller-manager-multinode-340918
	
	* 
	* ==> coredns [1ac4f92beba93d79af0014fd0b7db9721e867a660f7224175e21f22b3a328005] <==
	* [INFO] 10.244.0.3:52037 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090993s
	[INFO] 10.244.1.2:44824 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113464s
	[INFO] 10.244.1.2:51582 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002015291s
	[INFO] 10.244.1.2:42581 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093495s
	[INFO] 10.244.1.2:44202 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090836s
	[INFO] 10.244.1.2:59584 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001346398s
	[INFO] 10.244.1.2:50421 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000507s
	[INFO] 10.244.1.2:49592 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079876s
	[INFO] 10.244.1.2:35124 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047246s
	[INFO] 10.244.0.3:38306 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096443s
	[INFO] 10.244.0.3:51421 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093641s
	[INFO] 10.244.0.3:41524 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082399s
	[INFO] 10.244.0.3:47721 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045993s
	[INFO] 10.244.1.2:33008 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114717s
	[INFO] 10.244.1.2:57526 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103296s
	[INFO] 10.244.1.2:34971 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067098s
	[INFO] 10.244.1.2:56565 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004882s
	[INFO] 10.244.0.3:37188 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104708s
	[INFO] 10.244.0.3:51900 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000098282s
	[INFO] 10.244.0.3:47810 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105721s
	[INFO] 10.244.0.3:52402 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000098151s
	[INFO] 10.244.1.2:33434 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132082s
	[INFO] 10.244.1.2:49715 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008483s
	[INFO] 10.244.1.2:46150 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000072777s
	[INFO] 10.244.1.2:45725 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068921s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-340918
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-340918
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=multinode-340918
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T20_00_00_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 19:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-340918
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 20:01:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 20:00:43 +0000   Tue, 05 Dec 2023 19:59:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 20:00:43 +0000   Tue, 05 Dec 2023 19:59:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 20:00:43 +0000   Tue, 05 Dec 2023 19:59:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 20:00:43 +0000   Tue, 05 Dec 2023 20:00:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-340918
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 c3bb48e888934db1ae186301da1a1f47
	  System UUID:                4e79772a-4cfd-4437-888b-7f70c89d50e1
	  Boot ID:                    cdc0538f-6890-4ebd-b17b-f40ba8f6605f
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-fcrbt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5dd5756b68-skz8t                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     58s
	  kube-system                 etcd-multinode-340918                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         71s
	  kube-system                 kindnet-h9575                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      59s
	  kube-system                 kube-apiserver-multinode-340918             250m (3%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-multinode-340918    200m (2%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-kzfjz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-multinode-340918             100m (1%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 57s   kube-proxy       
	  Normal  Starting                 71s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s   kubelet          Node multinode-340918 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s   kubelet          Node multinode-340918 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s   kubelet          Node multinode-340918 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           60s   node-controller  Node multinode-340918 event: Registered Node multinode-340918 in Controller
	  Normal  NodeReady                27s   kubelet          Node multinode-340918 status is now: NodeReady
	
	
	Name:               multinode-340918-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-340918-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=multinode-340918
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_05T20_01_00_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:01:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-340918-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 20:01:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 20:01:02 +0000   Tue, 05 Dec 2023 20:01:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 20:01:02 +0000   Tue, 05 Dec 2023 20:01:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 20:01:02 +0000   Tue, 05 Dec 2023 20:01:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 20:01:02 +0000   Tue, 05 Dec 2023 20:01:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-340918-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f29cfd5bd0f4c47b2dc5085ab3258d0
	  System UUID:                b9d49672-dc68-45a8-9e6b-18a89f428016
	  Boot ID:                    cdc0538f-6890-4ebd-b17b-f40ba8f6605f
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-pl2b5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-6ljpq               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10s
	  kube-system                 kube-proxy-kgt9d            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9s                 kube-proxy       
	  Normal  RegisteredNode           10s                node-controller  Node multinode-340918-m02 event: Registered Node multinode-340918-m02 in Controller
	  Normal  NodeHasSufficientMemory  10s (x5 over 11s)  kubelet          Node multinode-340918-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x5 over 11s)  kubelet          Node multinode-340918-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x5 over 11s)  kubelet          Node multinode-340918-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8s                 kubelet          Node multinode-340918-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004954] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.007949] FS-Cache: N-cookie d=00000000a3a7830d{9p.inode} n=00000000d068e7a4
	[  +0.008733] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.275652] FS-Cache: Duplicate cookie detected
	[  +0.004668] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006736] FS-Cache: O-cookie d=00000000a3a7830d{9p.inode} n=0000000022467da8
	[  +0.007351] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.005018] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007952] FS-Cache: N-cookie d=00000000a3a7830d{9p.inode} n=00000000e4e0acc4
	[  +0.008780] FS-Cache: N-key=[8] '0690130200000000'
	[Dec 5 19:50] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 5 19:51] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 66 58 d5 24 58 0e 32 42 cf 8d 23 ab 08 00
	[  +1.008199] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 66 58 d5 24 58 0e 32 42 cf 8d 23 ab 08 00
	[  +2.015878] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 66 58 d5 24 58 0e 32 42 cf 8d 23 ab 08 00
	[Dec 5 19:52] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 66 58 d5 24 58 0e 32 42 cf 8d 23 ab 08 00
	[  +8.187450] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 66 58 d5 24 58 0e 32 42 cf 8d 23 ab 08 00
	[ +16.126869] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 66 58 d5 24 58 0e 32 42 cf 8d 23 ab 08 00
	[ +32.253734] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 66 58 d5 24 58 0e 32 42 cf 8d 23 ab 08 00
	
	* 
	* ==> etcd [df4f38330e3567cb4b1f286a538305123d9b7d4a478f7b35465cc074a166cf46] <==
	* {"level":"info","ts":"2023-12-05T19:59:54.031522Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-05T19:59:54.031874Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-05T19:59:54.03236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-12-05T19:59:54.032388Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-05T19:59:54.032329Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-05T19:59:54.032489Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-12-05T19:59:54.255662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-05T19:59:54.255704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-05T19:59:54.255738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-12-05T19:59:54.255753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-12-05T19:59:54.255759Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-05T19:59:54.255767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-12-05T19:59:54.255775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-05T19:59:54.256798Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T19:59:54.257483Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-340918 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-05T19:59:54.257516Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T19:59:54.257519Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T19:59:54.257666Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-05T19:59:54.257694Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-05T19:59:54.257974Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T19:59:54.258243Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T19:59:54.258303Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T19:59:54.258813Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-05T19:59:54.258838Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-12-05T20:00:50.05402Z","caller":"traceutil/trace.go:171","msg":"trace[1341739717] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"105.437877ms","start":"2023-12-05T20:00:49.948563Z","end":"2023-12-05T20:00:50.054001Z","steps":["trace[1341739717] 'process raft request'  (duration: 105.298445ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  20:01:10 up 43 min,  0 users,  load average: 1.04, 0.98, 0.66
	Linux multinode-340918 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [fa7bfaebf13c69ee7aae878531d9496d404c5342cac5f6326ca4c5d889503c7c] <==
	* I1205 20:00:12.739745       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1205 20:00:12.739846       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I1205 20:00:12.740055       1 main.go:116] setting mtu 1500 for CNI 
	I1205 20:00:12.740080       1 main.go:146] kindnetd IP family: "ipv4"
	I1205 20:00:12.740102       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1205 20:00:43.057064       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1205 20:00:43.065120       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1205 20:00:43.065149       1 main.go:227] handling current node
	I1205 20:00:53.079633       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1205 20:00:53.079663       1 main.go:227] handling current node
	I1205 20:01:03.091719       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1205 20:01:03.091745       1 main.go:227] handling current node
	I1205 20:01:03.091755       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1205 20:01:03.091759       1 main.go:250] Node multinode-340918-m02 has CIDR [10.244.1.0/24] 
	I1205 20:01:03.091903       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
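	(The route kindnet reports adding above is an ordinary kernel route for the peer node's pod CIDR; a minimal hand-run equivalent, using the values copied from the log and leaving interface selection to the kernel, would be:
	
	    ip route add 10.244.1.0/24 via 192.168.58.3
	
	run inside the node, e.g. after "minikube -p multinode-340918 ssh".)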
	
	* 
	* ==> kube-apiserver [230bd31d887ca1dc749c3d899cea394f5631e0688585a2ee83a5308bcb2c29e5] <==
	* I1205 19:59:56.227883       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1205 19:59:56.228472       1 aggregator.go:166] initial CRD sync complete...
	I1205 19:59:56.228531       1 autoregister_controller.go:141] Starting autoregister controller
	I1205 19:59:56.228583       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 19:59:56.228616       1 cache.go:39] Caches are synced for autoregister controller
	I1205 19:59:56.229159       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1205 19:59:56.229216       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1205 19:59:56.234451       1 controller.go:624] quota admission added evaluator for: namespaces
	I1205 19:59:56.240961       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1205 19:59:56.324759       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 19:59:57.082530       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1205 19:59:57.086052       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1205 19:59:57.086072       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 19:59:57.517102       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 19:59:57.555296       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 19:59:57.647311       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1205 19:59:57.655371       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1205 19:59:57.656593       1 controller.go:624] quota admission added evaluator for: endpoints
	I1205 19:59:57.661288       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 19:59:58.156812       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1205 19:59:59.221834       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1205 19:59:59.231909       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 19:59:59.240169       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1205 20:00:11.716603       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1205 20:00:11.865701       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [acc2defbadf58895209a4779449f4048b98d7af8dfcfd9f373974f45f3eda726] <==
	* I1205 20:00:43.439305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="146.057µs"
	I1205 20:00:43.461069       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.771µs"
	I1205 20:00:44.505680       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.11µs"
	I1205 20:00:44.531334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.862692ms"
	I1205 20:00:44.531447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.836µs"
	I1205 20:00:45.963549       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1205 20:01:00.498008       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-340918-m02\" does not exist"
	I1205 20:01:00.509117       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6ljpq"
	I1205 20:01:00.509221       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kgt9d"
	I1205 20:01:00.511248       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-340918-m02" podCIDRs=["10.244.1.0/24"]
	I1205 20:01:00.965356       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-340918-m02"
	I1205 20:01:00.965385       1 event.go:307] "Event occurred" object="multinode-340918-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-340918-m02 event: Registered Node multinode-340918-m02 in Controller"
	I1205 20:01:02.248290       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-340918-m02"
	I1205 20:01:04.804046       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1205 20:01:04.811025       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-pl2b5"
	I1205 20:01:04.816103       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-fcrbt"
	I1205 20:01:04.820988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="17.186586ms"
	I1205 20:01:04.826050       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.006215ms"
	I1205 20:01:04.838560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="12.456894ms"
	I1205 20:01:04.838685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="76.922µs"
	I1205 20:01:05.976119       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-pl2b5" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-pl2b5"
	I1205 20:01:06.555516       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.361729ms"
	I1205 20:01:06.555633       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="68.411µs"
	I1205 20:01:07.031247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.307839ms"
	I1205 20:01:07.031333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.847µs"
	
	* 
	* ==> kube-proxy [40031b9dc7b3cd09510307f161bda36fe4a4ebf0e4e869018b818eb8f351cd0a] <==
	* I1205 20:00:12.847654       1 server_others.go:69] "Using iptables proxy"
	I1205 20:00:12.856257       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1205 20:00:12.924542       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 20:00:12.927353       1 server_others.go:152] "Using iptables Proxier"
	I1205 20:00:12.927394       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1205 20:00:12.927406       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1205 20:00:12.927443       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 20:00:12.927754       1 server.go:846] "Version info" version="v1.28.4"
	I1205 20:00:12.927773       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:00:12.928478       1 config.go:188] "Starting service config controller"
	I1205 20:00:12.928554       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 20:00:12.928640       1 config.go:315] "Starting node config controller"
	I1205 20:00:12.928676       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 20:00:12.928676       1 config.go:97] "Starting endpoint slice config controller"
	I1205 20:00:12.928728       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 20:00:13.028729       1 shared_informer.go:318] Caches are synced for node config
	I1205 20:00:13.028726       1 shared_informer.go:318] Caches are synced for service config
	I1205 20:00:13.029826       1 shared_informer.go:318] Caches are synced for endpoint slice config
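	(The route_localnet line above refers to the standard Linux sysctl net.ipv4.conf.all.route_localnet that kube-proxy enables; a hedged way to verify it by hand on this profile would be:
	
	    minikube -p multinode-340918 ssh "sysctl net.ipv4.conf.all.route_localnet"
	
	which should print "net.ipv4.conf.all.route_localnet = 1" while localhost node-ports are allowed.)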
	
	* 
	* ==> kube-scheduler [116bcef6cf3192b84d1c9f1dbaa01228bf91165e48b53a376a4adf036145ed7b] <==
	* W1205 19:59:56.248458       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 19:59:56.248794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1205 19:59:56.248999       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 19:59:56.249013       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1205 19:59:56.249301       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 19:59:56.249316       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1205 19:59:56.249692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:59:56.249705       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 19:59:56.249818       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:59:56.249828       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1205 19:59:57.065543       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 19:59:57.065576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1205 19:59:57.079935       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 19:59:57.079964       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1205 19:59:57.206906       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:59:57.206935       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1205 19:59:57.212469       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 19:59:57.212501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1205 19:59:57.226742       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 19:59:57.226769       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1205 19:59:57.229074       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:59:57.229097       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1205 19:59:57.238411       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 19:59:57.238444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1205 19:59:57.842423       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 05 20:00:12 multinode-340918 kubelet[1591]: I1205 20:00:12.017287    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a47313d-f97d-4de3-9298-5aeee7cc15e9-xtables-lock\") pod \"kindnet-h9575\" (UID: \"5a47313d-f97d-4de3-9298-5aeee7cc15e9\") " pod="kube-system/kindnet-h9575"
	Dec 05 20:00:12 multinode-340918 kubelet[1591]: I1205 20:00:12.017316    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a47313d-f97d-4de3-9298-5aeee7cc15e9-lib-modules\") pod \"kindnet-h9575\" (UID: \"5a47313d-f97d-4de3-9298-5aeee7cc15e9\") " pod="kube-system/kindnet-h9575"
	Dec 05 20:00:12 multinode-340918 kubelet[1591]: I1205 20:00:12.017347    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzf2r\" (UniqueName: \"kubernetes.io/projected/5a47313d-f97d-4de3-9298-5aeee7cc15e9-kube-api-access-gzf2r\") pod \"kindnet-h9575\" (UID: \"5a47313d-f97d-4de3-9298-5aeee7cc15e9\") " pod="kube-system/kindnet-h9575"
	Dec 05 20:00:12 multinode-340918 kubelet[1591]: I1205 20:00:12.017378    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78fc1f07-e92e-4a48-a04c-62cc7cea5435-xtables-lock\") pod \"kube-proxy-kzfjz\" (UID: \"78fc1f07-e92e-4a48-a04c-62cc7cea5435\") " pod="kube-system/kube-proxy-kzfjz"
	Dec 05 20:00:12 multinode-340918 kubelet[1591]: I1205 20:00:12.017433    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/78fc1f07-e92e-4a48-a04c-62cc7cea5435-kube-proxy\") pod \"kube-proxy-kzfjz\" (UID: \"78fc1f07-e92e-4a48-a04c-62cc7cea5435\") " pod="kube-system/kube-proxy-kzfjz"
	Dec 05 20:00:12 multinode-340918 kubelet[1591]: I1205 20:00:12.017464    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5a47313d-f97d-4de3-9298-5aeee7cc15e9-cni-cfg\") pod \"kindnet-h9575\" (UID: \"5a47313d-f97d-4de3-9298-5aeee7cc15e9\") " pod="kube-system/kindnet-h9575"
	Dec 05 20:00:12 multinode-340918 kubelet[1591]: W1205 20:00:12.325714    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/863592a132d83965efccebe87c95756e84f3f16e202315cf489fc372f87f4af7/crio-1ecf147d248eeaf769e174c03d0ef3ac525cfbf90e1fb179e0e9477cc990f085 WatchSource:0}: Error finding container 1ecf147d248eeaf769e174c03d0ef3ac525cfbf90e1fb179e0e9477cc990f085: Status 404 returned error can't find the container with id 1ecf147d248eeaf769e174c03d0ef3ac525cfbf90e1fb179e0e9477cc990f085
	Dec 05 20:00:12 multinode-340918 kubelet[1591]: W1205 20:00:12.326196    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/863592a132d83965efccebe87c95756e84f3f16e202315cf489fc372f87f4af7/crio-a833599dced66cad1378dc2680b0bd71393f7bfe8ab148795741d15e860816cf WatchSource:0}: Error finding container a833599dced66cad1378dc2680b0bd71393f7bfe8ab148795741d15e860816cf: Status 404 returned error can't find the container with id a833599dced66cad1378dc2680b0bd71393f7bfe8ab148795741d15e860816cf
	Dec 05 20:00:13 multinode-340918 kubelet[1591]: I1205 20:00:13.453075    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-h9575" podStartSLOduration=2.453023586 podCreationTimestamp="2023-12-05 20:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-05 20:00:13.45284583 +0000 UTC m=+14.254338702" watchObservedRunningTime="2023-12-05 20:00:13.453023586 +0000 UTC m=+14.254516459"
	Dec 05 20:00:13 multinode-340918 kubelet[1591]: I1205 20:00:13.465199    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kzfjz" podStartSLOduration=2.465149078 podCreationTimestamp="2023-12-05 20:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-05 20:00:13.46503725 +0000 UTC m=+14.266530140" watchObservedRunningTime="2023-12-05 20:00:13.465149078 +0000 UTC m=+14.266641951"
	Dec 05 20:00:43 multinode-340918 kubelet[1591]: I1205 20:00:43.414460    1591 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 05 20:00:43 multinode-340918 kubelet[1591]: I1205 20:00:43.437745    1591 topology_manager.go:215] "Topology Admit Handler" podUID="178fdf74-f6b5-4bfd-8c4e-7511303ab9c2" podNamespace="kube-system" podName="storage-provisioner"
	Dec 05 20:00:43 multinode-340918 kubelet[1591]: I1205 20:00:43.439274    1591 topology_manager.go:215] "Topology Admit Handler" podUID="d21b0f8e-2cfc-4fdf-a923-b997fb927fbe" podNamespace="kube-system" podName="coredns-5dd5756b68-skz8t"
	Dec 05 20:00:43 multinode-340918 kubelet[1591]: I1205 20:00:43.458533    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhfdr\" (UniqueName: \"kubernetes.io/projected/d21b0f8e-2cfc-4fdf-a923-b997fb927fbe-kube-api-access-xhfdr\") pod \"coredns-5dd5756b68-skz8t\" (UID: \"d21b0f8e-2cfc-4fdf-a923-b997fb927fbe\") " pod="kube-system/coredns-5dd5756b68-skz8t"
	Dec 05 20:00:43 multinode-340918 kubelet[1591]: I1205 20:00:43.458600    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtlsh\" (UniqueName: \"kubernetes.io/projected/178fdf74-f6b5-4bfd-8c4e-7511303ab9c2-kube-api-access-vtlsh\") pod \"storage-provisioner\" (UID: \"178fdf74-f6b5-4bfd-8c4e-7511303ab9c2\") " pod="kube-system/storage-provisioner"
	Dec 05 20:00:43 multinode-340918 kubelet[1591]: I1205 20:00:43.458717    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/178fdf74-f6b5-4bfd-8c4e-7511303ab9c2-tmp\") pod \"storage-provisioner\" (UID: \"178fdf74-f6b5-4bfd-8c4e-7511303ab9c2\") " pod="kube-system/storage-provisioner"
	Dec 05 20:00:43 multinode-340918 kubelet[1591]: I1205 20:00:43.458779    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d21b0f8e-2cfc-4fdf-a923-b997fb927fbe-config-volume\") pod \"coredns-5dd5756b68-skz8t\" (UID: \"d21b0f8e-2cfc-4fdf-a923-b997fb927fbe\") " pod="kube-system/coredns-5dd5756b68-skz8t"
	Dec 05 20:00:43 multinode-340918 kubelet[1591]: W1205 20:00:43.781064    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/863592a132d83965efccebe87c95756e84f3f16e202315cf489fc372f87f4af7/crio-50ebba034d580c35b70b5a4763a8aa9a48757fdca860f7dd2f3bb68687bff192 WatchSource:0}: Error finding container 50ebba034d580c35b70b5a4763a8aa9a48757fdca860f7dd2f3bb68687bff192: Status 404 returned error can't find the container with id 50ebba034d580c35b70b5a4763a8aa9a48757fdca860f7dd2f3bb68687bff192
	Dec 05 20:00:43 multinode-340918 kubelet[1591]: W1205 20:00:43.781340    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/863592a132d83965efccebe87c95756e84f3f16e202315cf489fc372f87f4af7/crio-9c182f578bc3673483df70aedf12f66e292b0c992946f3215101cac65da2e581 WatchSource:0}: Error finding container 9c182f578bc3673483df70aedf12f66e292b0c992946f3215101cac65da2e581: Status 404 returned error can't find the container with id 9c182f578bc3673483df70aedf12f66e292b0c992946f3215101cac65da2e581
	Dec 05 20:00:44 multinode-340918 kubelet[1591]: I1205 20:00:44.514970    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.51491625 podCreationTimestamp="2023-12-05 20:00:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-05 20:00:44.514712121 +0000 UTC m=+45.316205013" watchObservedRunningTime="2023-12-05 20:00:44.51491625 +0000 UTC m=+45.316409136"
	Dec 05 20:00:44 multinode-340918 kubelet[1591]: I1205 20:00:44.515067    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-skz8t" podStartSLOduration=32.515036061 podCreationTimestamp="2023-12-05 20:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-05 20:00:44.50562865 +0000 UTC m=+45.307121522" watchObservedRunningTime="2023-12-05 20:00:44.515036061 +0000 UTC m=+45.316528933"
	Dec 05 20:01:04 multinode-340918 kubelet[1591]: I1205 20:01:04.823375    1591 topology_manager.go:215] "Topology Admit Handler" podUID="8d1dd5f3-370b-4812-a70a-2896b420ac8e" podNamespace="default" podName="busybox-5bc68d56bd-fcrbt"
	Dec 05 20:01:04 multinode-340918 kubelet[1591]: I1205 20:01:04.887006    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smr4l\" (UniqueName: \"kubernetes.io/projected/8d1dd5f3-370b-4812-a70a-2896b420ac8e-kube-api-access-smr4l\") pod \"busybox-5bc68d56bd-fcrbt\" (UID: \"8d1dd5f3-370b-4812-a70a-2896b420ac8e\") " pod="default/busybox-5bc68d56bd-fcrbt"
	Dec 05 20:01:05 multinode-340918 kubelet[1591]: W1205 20:01:05.177095    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/863592a132d83965efccebe87c95756e84f3f16e202315cf489fc372f87f4af7/crio-8815aa79d41d77ff416f0cc200d938d569ac3d8f224b6a40bcadfdc43f88c104 WatchSource:0}: Error finding container 8815aa79d41d77ff416f0cc200d938d569ac3d8f224b6a40bcadfdc43f88c104: Status 404 returned error can't find the container with id 8815aa79d41d77ff416f0cc200d938d569ac3d8f224b6a40bcadfdc43f88c104
	Dec 05 20:01:06 multinode-340918 kubelet[1591]: I1205 20:01:06.550171    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-fcrbt" podStartSLOduration=1.738061422 podCreationTimestamp="2023-12-05 20:01:04 +0000 UTC" firstStartedPulling="2023-12-05 20:01:05.181004317 +0000 UTC m=+65.982497182" lastFinishedPulling="2023-12-05 20:01:05.993058839 +0000 UTC m=+66.794551694" observedRunningTime="2023-12-05 20:01:06.550066851 +0000 UTC m=+67.351559741" watchObservedRunningTime="2023-12-05 20:01:06.550115934 +0000 UTC m=+67.351608808"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-340918 -n multinode-340918
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-340918 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.23s)
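(For hand reproduction, the connectivity check this test automates is roughly the following sketch; the pod names come from the busybox deployment in the logs above, while the host IP placeholder is an assumption, since the value the test resolved is not captured in this excerpt:

    kubectl --context multinode-340918 exec busybox-5bc68d56bd-fcrbt -- sh -c "ping -c 1 <host-ip>"
    kubectl --context multinode-340918 exec busybox-5bc68d56bd-pl2b5 -- sh -c "ping -c 1 <host-ip>"

A timeout on these pings would point at pod-to-host egress rather than pod scheduling, which is consistent with the quick 3.23s failure above.)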

                                                
                                    
x
+
TestRunningBinaryUpgrade (99.3s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.3044762043.exe start -p running-upgrade-032685 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.3044762043.exe start -p running-upgrade-032685 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m33.206067747s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-032685 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-032685 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.938097277s)

                                                
                                                
-- stdout --
	* [running-upgrade-032685] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-032685 in cluster running-upgrade-032685
	* Pulling base image ...
	* Updating the running docker "running-upgrade-032685" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:12:34.338777  173859 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:12:34.338952  173859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:12:34.338964  173859 out.go:309] Setting ErrFile to fd 2...
	I1205 20:12:34.338970  173859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:12:34.339154  173859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
	I1205 20:12:34.339714  173859 out.go:303] Setting JSON to false
	I1205 20:12:34.341116  173859 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3306,"bootTime":1701803848,"procs":494,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:12:34.341178  173859 start.go:138] virtualization: kvm guest
	I1205 20:12:34.343567  173859 out.go:177] * [running-upgrade-032685] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:12:34.345793  173859 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:12:34.347998  173859 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:12:34.345874  173859 notify.go:220] Checking for updates...
	I1205 20:12:34.351014  173859 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 20:12:34.352635  173859 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	I1205 20:12:34.354015  173859 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:12:34.355352  173859 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:12:34.357155  173859 config.go:182] Loaded profile config "running-upgrade-032685": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1205 20:12:34.357196  173859 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1205 20:12:34.359242  173859 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1205 20:12:34.360503  173859 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:12:34.386078  173859 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 20:12:34.386197  173859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:12:34.462026  173859 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:66 SystemTime:2023-12-05 20:12:34.451837626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:12:34.462119  173859 docker.go:295] overlay module found
	I1205 20:12:34.464407  173859 out.go:177] * Using the docker driver based on existing profile
	I1205 20:12:34.465802  173859 start.go:298] selected driver: docker
	I1205 20:12:34.465814  173859 start.go:902] validating driver "docker" against &{Name:running-upgrade-032685 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-032685 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1205 20:12:34.465911  173859 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:12:34.466730  173859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:12:34.543321  173859 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:66 SystemTime:2023-12-05 20:12:34.534927063 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:12:34.543748  173859 cni.go:84] Creating CNI manager for ""
	I1205 20:12:34.543776  173859 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1205 20:12:34.543788  173859 start_flags.go:323] config:
	{Name:running-upgrade-032685 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-032685 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1205 20:12:34.546382  173859 out.go:177] * Starting control plane node running-upgrade-032685 in cluster running-upgrade-032685
	I1205 20:12:34.547840  173859 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 20:12:34.549652  173859 out.go:177] * Pulling base image ...
	I1205 20:12:34.550992  173859 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1205 20:12:34.551023  173859 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 20:12:34.574258  173859 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon, skipping pull
	I1205 20:12:34.574280  173859 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in daemon, skipping load
	W1205 20:12:34.574878  173859 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1205 20:12:34.574992  173859 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/running-upgrade-032685/config.json ...
	I1205 20:12:34.575100  173859 cache.go:107] acquiring lock: {Name:mk0753a08f5d80b6a23d94dc319693bb1f9358a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:12:34.575132  173859 cache.go:107] acquiring lock: {Name:mk51d179dda57f30320f94e52606d4676d7e5022 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:12:34.575106  173859 cache.go:107] acquiring lock: {Name:mkd0b529ccbf2ffb4e8fde0013e7881d949ccdc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:12:34.575230  173859 cache.go:194] Successfully downloaded all kic artifacts
	I1205 20:12:34.575275  173859 start.go:365] acquiring machines lock for running-upgrade-032685: {Name:mk8c6c888afab8c9323125abe79c56b2d4a1f529 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:12:34.575191  173859 cache.go:107] acquiring lock: {Name:mk38c73be8d3341ed9f5736e6d6a141853dab68d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:12:34.575170  173859 cache.go:107] acquiring lock: {Name:mkf8145b11132ce45b905f34e67e17492dec9e27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:12:34.575288  173859 cache.go:107] acquiring lock: {Name:mkad735f631f96b27997b7246b0137d9fe1f086e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:12:34.575176  173859 cache.go:107] acquiring lock: {Name:mk7a34361a8a448763070c06d96dda3b5a6779d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:12:34.575382  173859 cache.go:107] acquiring lock: {Name:mkc51dff815689751389293ba259b64d5eac52fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:12:34.575588  173859 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1205 20:12:34.575802  173859 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I1205 20:12:34.575847  173859 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I1205 20:12:34.575936  173859 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1205 20:12:34.576040  173859 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I1205 20:12:34.576128  173859 cache.go:115] /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 20:12:34.576152  173859 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.064735ms
	I1205 20:12:34.576170  173859 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 20:12:34.576047  173859 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1205 20:12:34.576282  173859 start.go:369] acquired machines lock for "running-upgrade-032685" in 986.227µs
	I1205 20:12:34.576303  173859 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:12:34.576310  173859 fix.go:54] fixHost starting: m01
	I1205 20:12:34.576334  173859 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I1205 20:12:34.576598  173859 cli_runner.go:164] Run: docker container inspect running-upgrade-032685 --format={{.State.Status}}
	I1205 20:12:34.578317  173859 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I1205 20:12:34.578316  173859 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I1205 20:12:34.578393  173859 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1205 20:12:34.578588  173859 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1205 20:12:34.578684  173859 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I1205 20:12:34.578970  173859 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I1205 20:12:34.580431  173859 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 20:12:34.616014  173859 fix.go:102] recreateIfNeeded on running-upgrade-032685: state=Running err=<nil>
	W1205 20:12:34.616053  173859 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:12:34.621968  173859 out.go:177] * Updating the running docker "running-upgrade-032685" container ...
	I1205 20:12:34.623771  173859 machine.go:88] provisioning docker machine ...
	I1205 20:12:34.623813  173859 ubuntu.go:169] provisioning hostname "running-upgrade-032685"
	I1205 20:12:34.623875  173859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-032685
	I1205 20:12:34.647406  173859 main.go:141] libmachine: Using SSH client type: native
	I1205 20:12:34.647960  173859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32925 <nil> <nil>}
	I1205 20:12:34.647983  173859 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-032685 && echo "running-upgrade-032685" | sudo tee /etc/hostname
	I1205 20:12:34.748092  173859 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1205 20:12:34.770944  173859 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 20:12:34.777185  173859 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1205 20:12:34.781424  173859 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I1205 20:12:34.790825  173859 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I1205 20:12:34.801504  173859 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I1205 20:12:34.819977  173859 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I1205 20:12:34.821989  173859 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-032685
	
	I1205 20:12:34.822055  173859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-032685
	I1205 20:12:34.849054  173859 main.go:141] libmachine: Using SSH client type: native
	I1205 20:12:34.849407  173859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32925 <nil> <nil>}
	I1205 20:12:34.849430  173859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-032685' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-032685/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-032685' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:12:34.875252  173859 cache.go:157] /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1205 20:12:34.875274  173859 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 299.930307ms
	I1205 20:12:34.875291  173859 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1205 20:12:34.965212  173859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:12:34.965272  173859 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6088/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6088/.minikube}
	I1205 20:12:34.965389  173859 ubuntu.go:177] setting up certificates
	I1205 20:12:34.965417  173859 provision.go:83] configureAuth start
	I1205 20:12:34.965661  173859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-032685
	I1205 20:12:35.001014  173859 provision.go:138] copyHostCerts
	I1205 20:12:35.001134  173859 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem, removing ...
	I1205 20:12:35.001156  173859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem
	I1205 20:12:35.001219  173859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem (1679 bytes)
	I1205 20:12:35.001608  173859 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem, removing ...
	I1205 20:12:35.001647  173859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem
	I1205 20:12:35.002708  173859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem (1078 bytes)
	I1205 20:12:35.002830  173859 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem, removing ...
	I1205 20:12:35.002863  173859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem
	I1205 20:12:35.002905  173859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem (1123 bytes)
	I1205 20:12:35.002988  173859 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-032685 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-032685]
	I1205 20:12:35.256991  173859 provision.go:172] copyRemoteCerts
	I1205 20:12:35.257066  173859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:12:35.257116  173859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-032685
	I1205 20:12:35.288094  173859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32925 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/running-upgrade-032685/id_rsa Username:docker}
	I1205 20:12:35.378377  173859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 20:12:35.398607  173859 cache.go:157] /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1205 20:12:35.398707  173859 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 823.462365ms
	I1205 20:12:35.398732  173859 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1205 20:12:35.401057  173859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:12:35.419767  173859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:12:35.444679  173859 provision.go:86] duration metric: configureAuth took 479.247865ms
	I1205 20:12:35.444740  173859 ubuntu.go:193] setting minikube options for container-runtime
	I1205 20:12:35.445058  173859 config.go:182] Loaded profile config "running-upgrade-032685": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1205 20:12:35.445189  173859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-032685
	I1205 20:12:35.467892  173859 main.go:141] libmachine: Using SSH client type: native
	I1205 20:12:35.468443  173859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32925 <nil> <nil>}
	I1205 20:12:35.468476  173859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:12:35.727978  173859 cache.go:157] /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1205 20:12:35.728160  173859 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 1.152991583s
	I1205 20:12:35.728239  173859 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1205 20:12:35.938235  173859 cache.go:157] /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1205 20:12:35.938332  173859 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.363209228s
	I1205 20:12:35.938354  173859 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1205 20:12:35.965829  173859 cache.go:157] /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1205 20:12:35.965860  173859 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.390671286s
	I1205 20:12:35.965898  173859 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1205 20:12:36.022919  173859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:12:36.022945  173859 machine.go:91] provisioned docker machine in 1.399152689s
	I1205 20:12:36.022957  173859 start.go:300] post-start starting for "running-upgrade-032685" (driver="docker")
	I1205 20:12:36.022969  173859 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:12:36.023033  173859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:12:36.023078  173859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-032685
	I1205 20:12:36.043778  173859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32925 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/running-upgrade-032685/id_rsa Username:docker}
	I1205 20:12:36.131770  173859 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:12:36.134748  173859 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 20:12:36.134777  173859 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 20:12:36.134791  173859 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 20:12:36.134799  173859 info.go:137] Remote host: Ubuntu 19.10
	I1205 20:12:36.134810  173859 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6088/.minikube/addons for local assets ...
	I1205 20:12:36.134870  173859 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6088/.minikube/files for local assets ...
	I1205 20:12:36.134957  173859 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem -> 128832.pem in /etc/ssl/certs
	I1205 20:12:36.135061  173859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:12:36.141749  173859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem --> /etc/ssl/certs/128832.pem (1708 bytes)
	I1205 20:12:36.158455  173859 start.go:303] post-start completed in 135.482461ms
	I1205 20:12:36.158544  173859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:12:36.158593  173859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-032685
	I1205 20:12:36.181865  173859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32925 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/running-upgrade-032685/id_rsa Username:docker}
	I1205 20:12:36.270670  173859 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 20:12:36.279991  173859 fix.go:56] fixHost completed within 1.703673673s
	I1205 20:12:36.280040  173859 start.go:83] releasing machines lock for "running-upgrade-032685", held for 1.703744143s
	I1205 20:12:36.280133  173859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-032685
	I1205 20:12:36.301036  173859 ssh_runner.go:195] Run: cat /version.json
	I1205 20:12:36.301148  173859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-032685
	I1205 20:12:36.301058  173859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:12:36.301262  173859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-032685
	I1205 20:12:36.311657  173859 cache.go:157] /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1205 20:12:36.311688  173859 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.736526278s
	I1205 20:12:36.311706  173859 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1205 20:12:36.320691  173859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32925 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/running-upgrade-032685/id_rsa Username:docker}
	I1205 20:12:36.320947  173859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32925 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/running-upgrade-032685/id_rsa Username:docker}
	W1205 20:12:36.433598  173859 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1205 20:12:36.649379  173859 cache.go:157] /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1205 20:12:36.649412  173859 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 2.074316977s
	I1205 20:12:36.649423  173859 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1205 20:12:36.649439  173859 cache.go:87] Successfully saved all images to host disk.
	I1205 20:12:36.649501  173859 ssh_runner.go:195] Run: systemctl --version
	I1205 20:12:36.653792  173859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:12:36.713909  173859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:12:36.718331  173859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:12:36.734555  173859 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 20:12:36.734633  173859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:12:36.766151  173859 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:12:36.766179  173859 start.go:475] detecting cgroup driver to use...
	I1205 20:12:36.766212  173859 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1205 20:12:36.766254  173859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:12:36.793937  173859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:12:36.806759  173859 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:12:36.806839  173859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:12:36.821240  173859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:12:36.837526  173859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1205 20:12:36.851252  173859 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1205 20:12:36.851312  173859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:12:36.950531  173859 docker.go:219] disabling docker service ...
	I1205 20:12:36.950607  173859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:12:36.965520  173859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:12:36.979498  173859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:12:37.070635  173859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:12:37.158330  173859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:12:37.171975  173859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:12:37.187135  173859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 20:12:37.187206  173859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:12:37.198334  173859 out.go:177] 
	W1205 20:12:37.199824  173859 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1205 20:12:37.199848  173859 out.go:239] * 
	* 
	W1205 20:12:37.200883  173859 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:12:37.202514  173859 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-032685 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
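The exit status 90 above comes down to one missing file: the new binary rewrites pause_image with sed in /etc/crio/crio.conf.d/02-crio.conf, but the v1.9.0-era kicbase image (Ubuntu 19.10, per the "Remote host" line earlier in the log) ships only a monolithic /etc/crio/crio.conf and has no drop-in directory. A layout-tolerant fallback might look like the sketch below; this is illustrative only, not minikube's actual code, and the sed expression is copied verbatim from the error text.
	# Sketch: target whichever CRI-O config layout the image actually has.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf   # older kicbase images ship a single config file
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
	sudo systemctl restart crio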
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-05 20:12:37.224542499 +0000 UTC m=+2272.365083713
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-032685
helpers_test.go:235: (dbg) docker inspect running-upgrade-032685:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ad868ac3d24b9c379aeffcac71fa999b965870e7a8ba2eb2daca85e91fa8149d",
	        "Created": "2023-12-05T20:11:20.288056425Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 149520,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-05T20:11:21.357583047Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/ad868ac3d24b9c379aeffcac71fa999b965870e7a8ba2eb2daca85e91fa8149d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ad868ac3d24b9c379aeffcac71fa999b965870e7a8ba2eb2daca85e91fa8149d/hostname",
	        "HostsPath": "/var/lib/docker/containers/ad868ac3d24b9c379aeffcac71fa999b965870e7a8ba2eb2daca85e91fa8149d/hosts",
	        "LogPath": "/var/lib/docker/containers/ad868ac3d24b9c379aeffcac71fa999b965870e7a8ba2eb2daca85e91fa8149d/ad868ac3d24b9c379aeffcac71fa999b965870e7a8ba2eb2daca85e91fa8149d-json.log",
	        "Name": "/running-upgrade-032685",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-032685:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f7d07ecb5a8e6f7be974a4d470a437dd5f1ade4e168009af148a87b26bc571da-init/diff:/var/lib/docker/overlay2/f6ec9a983994ccc08147e6331d6f2022684e7b4eda074ca219bac7956c1c7b7c/diff:/var/lib/docker/overlay2/639684b512582e459ec81de6d89c1bd5429f0563db2332fc64894aeb0e1d1f05/diff:/var/lib/docker/overlay2/ba12b85060c90c99bcd4e06612f8233752b03a5827a9380a34d398b6d4867ae0/diff:/var/lib/docker/overlay2/b3d604e7197cf14f19f8f2945722260e422fc348b4369e23cc815e24235f3e77/diff:/var/lib/docker/overlay2/09bfff6d260a343c5df56526cb6dbc0342f425f0c8c822414a0c843d69643baf/diff:/var/lib/docker/overlay2/bf433967b77c570b8fee845fb10539f96ab513317ab115a501e1da38a861b2ee/diff:/var/lib/docker/overlay2/d80bf1ac285f2cc51777ef4cb628daf4a8c0b864f274da1a8167208b62d04dec/diff:/var/lib/docker/overlay2/d782fb2700e4cada91a3ccbd05f3d3e42c14423260ad9e6cc92097d6279a3098/diff:/var/lib/docker/overlay2/473d7984c0fb4ce6a71ff7b70b83abef740b4e942ca387ae43513a5a941226fa/diff:/var/lib/docker/overlay2/5c6065
5f98e6934d45f8dd79aafe399986a0697b6587b8302bf20e45b560ae21/diff:/var/lib/docker/overlay2/4156db4e816949416a2a3443dcf392378f309656356e941d7de74f68d4836f3d/diff:/var/lib/docker/overlay2/8d055df96d0785a1af5bb8a843590547d60a042a6a09d574b87b5123652c3e91/diff:/var/lib/docker/overlay2/bb7804981eb73dd48caef62ac5354fc7d6be5bdfb18ec1114fef9410dce709d4/diff:/var/lib/docker/overlay2/0a1eeedf19ac0f970d2035e2c51d1c7b0db5c73df02f43a7b8195b232dd1bd8a/diff:/var/lib/docker/overlay2/7bd86827e91b36bf31e71c6b245f2ca5602049b5b19ecaf241634c3100b1399f/diff:/var/lib/docker/overlay2/f2b4827a40b8ad871d07ffae7e3dff162670c33d693bea0eca6dae2842d0e8ee/diff:/var/lib/docker/overlay2/aff786e86ad1757ede017646a68c85a23ec717e93818ab7581a6d750476276be/diff:/var/lib/docker/overlay2/60e1b105c612d228e91f0c0a14464d7a146deb23c623e8323c4390c0d4ff19dd/diff:/var/lib/docker/overlay2/590c377afde4cfb7ec06557b25ed577ba79d46cfdcbcbe28fb34bed3405b8047/diff:/var/lib/docker/overlay2/6c7e3409795aebf6a5518dce9162e93096adc32191948b7151e0fd74135ca6ed/diff:/var/lib/d
ocker/overlay2/14eb8ca8f261b7b99284f41bc2c8f386f958b3db51b32790df53a1fa6ad5d856/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f7d07ecb5a8e6f7be974a4d470a437dd5f1ade4e168009af148a87b26bc571da/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f7d07ecb5a8e6f7be974a4d470a437dd5f1ade4e168009af148a87b26bc571da/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f7d07ecb5a8e6f7be974a4d470a437dd5f1ade4e168009af148a87b26bc571da/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-032685",
	                "Source": "/var/lib/docker/volumes/running-upgrade-032685/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-032685",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-032685",
	                "name.minikube.sigs.k8s.io": "running-upgrade-032685",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "65bca06a622c8b289cf00d4235f9a022ec284614e2bcad58849ec3281c883e04",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32925"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32924"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32923"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/65bca06a622c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "206708038c24eca05650f8f1f287dcc37d0951fa04588efb556c4f9ea4571e40",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "14a1f6d438fbd7813b03b1813119b0bf95632f7e6ea9584b7915a6d164df9d60",
	                    "EndpointID": "206708038c24eca05650f8f1f287dcc37d0951fa04588efb556c4f9ea4571e40",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
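The inspect output confirms the container survived the failed start: State.Status is "running" and the image is still the v1.9.0-era kicbase (gcr.io/k8s-minikube/kicbase:v0.0.8), i.e. the upgrade was attempted in place. For a quick post-mortem, the same two fields can be pulled with a format string, for example:
	docker inspect -f '{{.State.Status}} {{.Config.Image}}' running-upgrade-032685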
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-032685 -n running-upgrade-032685
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-032685 -n running-upgrade-032685: exit status 4 (380.076738ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:12:37.602761  175220 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-032685" does not appear in /home/jenkins/minikube-integration/17731-6088/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-032685" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-032685" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-032685
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-032685: (2.210476914s)
--- FAIL: TestRunningBinaryUpgrade (99.30s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (69.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.3626814299.exe start -p stopped-upgrade-519106 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.3626814299.exe start -p stopped-upgrade-519106 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m3.026196419s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.3626814299.exe -p stopped-upgrade-519106 stop
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-519106 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1205 20:14:28.652820   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-519106 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.63870354s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-519106] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-519106 in cluster stopped-upgrade-519106
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-519106" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:14:25.990294  199892 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:14:25.990428  199892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:14:25.990436  199892 out.go:309] Setting ErrFile to fd 2...
	I1205 20:14:25.990441  199892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:14:25.990674  199892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
	I1205 20:14:25.991223  199892 out.go:303] Setting JSON to false
	I1205 20:14:25.992682  199892 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3418,"bootTime":1701803848,"procs":638,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:14:25.992749  199892 start.go:138] virtualization: kvm guest
	I1205 20:14:25.995024  199892 out.go:177] * [stopped-upgrade-519106] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:14:25.996335  199892 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:14:25.997667  199892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:14:25.996404  199892 notify.go:220] Checking for updates...
	I1205 20:14:26.000068  199892 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 20:14:26.001396  199892 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	I1205 20:14:26.002775  199892 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:14:26.004092  199892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:14:26.005857  199892 config.go:182] Loaded profile config "stopped-upgrade-519106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1205 20:14:26.005877  199892 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1205 20:14:26.007672  199892 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1205 20:14:26.008839  199892 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:14:26.031301  199892 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 20:14:26.031439  199892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:14:26.085671  199892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:66 SystemTime:2023-12-05 20:14:26.076674376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:14:26.085763  199892 docker.go:295] overlay module found
	I1205 20:14:26.087690  199892 out.go:177] * Using the docker driver based on existing profile
	I1205 20:14:26.089090  199892 start.go:298] selected driver: docker
	I1205 20:14:26.089103  199892 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-519106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-519106 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1205 20:14:26.089171  199892 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:14:26.089960  199892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:14:26.141737  199892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:66 SystemTime:2023-12-05 20:14:26.132916464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:14:26.142079  199892 cni.go:84] Creating CNI manager for ""
	I1205 20:14:26.142102  199892 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1205 20:14:26.142114  199892 start_flags.go:323] config:
	{Name:stopped-upgrade-519106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-519106 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1205 20:14:26.144792  199892 out.go:177] * Starting control plane node stopped-upgrade-519106 in cluster stopped-upgrade-519106
	I1205 20:14:26.146175  199892 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 20:14:26.147688  199892 out.go:177] * Pulling base image ...
	I1205 20:14:26.149137  199892 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1205 20:14:26.149166  199892 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 20:14:26.166514  199892 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon, skipping pull
	I1205 20:14:26.166542  199892 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in daemon, skipping load
	W1205 20:14:26.172226  199892 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1205 20:14:26.172370  199892 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/stopped-upgrade-519106/config.json ...
	I1205 20:14:26.172475  199892 cache.go:107] acquiring lock: {Name:mk51d179dda57f30320f94e52606d4676d7e5022 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:14:26.172488  199892 cache.go:107] acquiring lock: {Name:mk38c73be8d3341ed9f5736e6d6a141853dab68d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:14:26.172516  199892 cache.go:107] acquiring lock: {Name:mkc51dff815689751389293ba259b64d5eac52fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:14:26.172462  199892 cache.go:107] acquiring lock: {Name:mk0753a08f5d80b6a23d94dc319693bb1f9358a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:14:26.172534  199892 cache.go:107] acquiring lock: {Name:mk7a34361a8a448763070c06d96dda3b5a6779d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:14:26.172558  199892 cache.go:107] acquiring lock: {Name:mkad735f631f96b27997b7246b0137d9fe1f086e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:14:26.172653  199892 cache.go:194] Successfully downloaded all kic artifacts
	I1205 20:14:26.172666  199892 cache.go:115] /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1205 20:14:26.172671  199892 cache.go:115] /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1205 20:14:26.172683  199892 cache.go:115] /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 20:14:26.172686  199892 cache.go:115] /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1205 20:14:26.172684  199892 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 172.833µs
	I1205 20:14:26.172697  199892 start.go:365] acquiring machines lock for stopped-upgrade-519106: {Name:mk2124f7bd102428155c3c1fcbd0ef0a2670f256 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:14:26.172708  199892 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1205 20:14:26.172705  199892 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 244.512µs
	I1205 20:14:26.172717  199892 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1205 20:14:26.172687  199892 cache.go:107] acquiring lock: {Name:mkf8145b11132ce45b905f34e67e17492dec9e27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:14:26.172728  199892 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 275.888µs
	I1205 20:14:26.172740  199892 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 20:14:26.172683  199892 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 160.105µs
	I1205 20:14:26.172755  199892 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1205 20:14:26.172671  199892 cache.go:115] /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1205 20:14:26.172772  199892 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 214.47µs
	I1205 20:14:26.172738  199892 cache.go:107] acquiring lock: {Name:mkd0b529ccbf2ffb4e8fde0013e7881d949ccdc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:14:26.172784  199892 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1205 20:14:26.172774  199892 start.go:369] acquired machines lock for "stopped-upgrade-519106" in 59.841µs
	I1205 20:14:26.172801  199892 cache.go:115] /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1205 20:14:26.172805  199892 cache.go:115] /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1205 20:14:26.172820  199892 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 251.807µs
	I1205 20:14:26.172835  199892 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1205 20:14:26.172819  199892 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 340.525µs
	I1205 20:14:26.172847  199892 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1205 20:14:26.172804  199892 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:14:26.172861  199892 fix.go:54] fixHost starting: m01
	I1205 20:14:26.172837  199892 cache.go:115] /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1205 20:14:26.173004  199892 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 293.651µs
	I1205 20:14:26.173024  199892 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1205 20:14:26.173040  199892 cache.go:87] Successfully saved all images to host disk.
	I1205 20:14:26.173119  199892 cli_runner.go:164] Run: docker container inspect stopped-upgrade-519106 --format={{.State.Status}}
	I1205 20:14:26.190443  199892 fix.go:102] recreateIfNeeded on stopped-upgrade-519106: state=Stopped err=<nil>
	W1205 20:14:26.190479  199892 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:14:26.192967  199892 out.go:177] * Restarting existing docker container for "stopped-upgrade-519106" ...
	I1205 20:14:26.194536  199892 cli_runner.go:164] Run: docker start stopped-upgrade-519106
	I1205 20:14:26.459025  199892 cli_runner.go:164] Run: docker container inspect stopped-upgrade-519106 --format={{.State.Status}}
	I1205 20:14:26.477067  199892 kic.go:430] container "stopped-upgrade-519106" state is running.
	I1205 20:14:26.477450  199892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-519106
	I1205 20:14:26.493767  199892 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/stopped-upgrade-519106/config.json ...
	I1205 20:14:26.493967  199892 machine.go:88] provisioning docker machine ...
	I1205 20:14:26.493986  199892 ubuntu.go:169] provisioning hostname "stopped-upgrade-519106"
	I1205 20:14:26.494028  199892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-519106
	I1205 20:14:26.511039  199892 main.go:141] libmachine: Using SSH client type: native
	I1205 20:14:26.511390  199892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I1205 20:14:26.511410  199892 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-519106 && echo "stopped-upgrade-519106" | sudo tee /etc/hostname
	I1205 20:14:26.512004  199892 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51058->127.0.0.1:32984: read: connection reset by peer
	I1205 20:14:29.625115  199892 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-519106
	
	I1205 20:14:29.625207  199892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-519106
	I1205 20:14:29.643105  199892 main.go:141] libmachine: Using SSH client type: native
	I1205 20:14:29.643572  199892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I1205 20:14:29.643682  199892 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-519106' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-519106/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-519106' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:14:29.748111  199892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
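
The heredoc above is the provisioner's standard hosts-file fixup: if /etc/hosts does not already map the new hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise it appends one. A minimal Go sketch of how such a command string can be assembled before being run over SSH (the helper name is hypothetical; the embedded shell logic mirrors the log verbatim):

	package main

	import "fmt"

	// hostsFixupCmd builds the /etc/hosts fixup script that provisioning runs
	// over SSH. Every %[1]s placeholder is replaced by the same hostname.
	func hostsFixupCmd(hostname string) string {
		return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, hostname)
	}

	func main() {
		fmt.Println(hostsFixupCmd("stopped-upgrade-519106"))
	}
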
	I1205 20:14:29.748136  199892 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6088/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6088/.minikube}
	I1205 20:14:29.748174  199892 ubuntu.go:177] setting up certificates
	I1205 20:14:29.748185  199892 provision.go:83] configureAuth start
	I1205 20:14:29.748281  199892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-519106
	I1205 20:14:29.765203  199892 provision.go:138] copyHostCerts
	I1205 20:14:29.765259  199892 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem, removing ...
	I1205 20:14:29.765272  199892 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem
	I1205 20:14:29.765338  199892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/ca.pem (1078 bytes)
	I1205 20:14:29.765434  199892 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem, removing ...
	I1205 20:14:29.765442  199892 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem
	I1205 20:14:29.765465  199892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/cert.pem (1123 bytes)
	I1205 20:14:29.765526  199892 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem, removing ...
	I1205 20:14:29.765534  199892 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem
	I1205 20:14:29.765553  199892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6088/.minikube/key.pem (1679 bytes)
	I1205 20:14:29.765608  199892 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-519106 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-519106]
	I1205 20:14:30.040853  199892 provision.go:172] copyRemoteCerts
	I1205 20:14:30.040918  199892 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:14:30.040956  199892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-519106
	I1205 20:14:30.058668  199892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/stopped-upgrade-519106/id_rsa Username:docker}
	I1205 20:14:30.139571  199892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:14:30.157572  199892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 20:14:30.175442  199892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:14:30.192900  199892 provision.go:86] duration metric: configureAuth took 444.673879ms
	I1205 20:14:30.192924  199892 ubuntu.go:193] setting minikube options for container-runtime
	I1205 20:14:30.193075  199892 config.go:182] Loaded profile config "stopped-upgrade-519106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1205 20:14:30.193160  199892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-519106
	I1205 20:14:30.211602  199892 main.go:141] libmachine: Using SSH client type: native
	I1205 20:14:30.211908  199892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I1205 20:14:30.211927  199892 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:14:30.764305  199892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:14:30.764333  199892 machine.go:91] provisioned docker machine in 4.270351873s
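
The /etc/sysconfig/crio.minikube drop-in written a few lines up marks the cluster's service CIDR (10.96.0.0/12, the same ServiceCIDR seen in the profile config) as an insecure registry, presumably so registries exposed on cluster service IPs can be pulled from without TLS, and then restarts cri-o to pick the flag up. A sketch of assembling that SSH command in Go (the function name and shape are illustrative assumptions, not minikube's actual code):

	package main

	import "fmt"

	// crioOptionsCmd renders the command that writes CRIO_MINIKUBE_OPTIONS into
	// /etc/sysconfig/crio.minikube and restarts cri-o, as shown in the log.
	func crioOptionsCmd(serviceCIDR string) string {
		opts := fmt.Sprintf("--insecure-registry %s ", serviceCIDR)
		return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
	CRIO_MINIKUBE_OPTIONS='%s'
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
	}

	func main() {
		fmt.Println(crioOptionsCmd("10.96.0.0/12"))
	}
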
	I1205 20:14:30.764344  199892 start.go:300] post-start starting for "stopped-upgrade-519106" (driver="docker")
	I1205 20:14:30.764357  199892 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:14:30.764422  199892 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:14:30.764468  199892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-519106
	I1205 20:14:30.782539  199892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/stopped-upgrade-519106/id_rsa Username:docker}
	I1205 20:14:30.863314  199892 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:14:30.866268  199892 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 20:14:30.866301  199892 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 20:14:30.866310  199892 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 20:14:30.866316  199892 info.go:137] Remote host: Ubuntu 19.10
	I1205 20:14:30.866326  199892 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6088/.minikube/addons for local assets ...
	I1205 20:14:30.866370  199892 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6088/.minikube/files for local assets ...
	I1205 20:14:30.866437  199892 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem -> 128832.pem in /etc/ssl/certs
	I1205 20:14:30.866549  199892 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:14:30.873620  199892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/ssl/certs/128832.pem --> /etc/ssl/certs/128832.pem (1708 bytes)
	I1205 20:14:30.890922  199892 start.go:303] post-start completed in 126.561259ms
	I1205 20:14:30.891005  199892 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:14:30.891051  199892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-519106
	I1205 20:14:30.911261  199892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/stopped-upgrade-519106/id_rsa Username:docker}
	I1205 20:14:30.988817  199892 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 20:14:30.993418  199892 fix.go:56] fixHost completed within 4.82055204s
	I1205 20:14:30.993448  199892 start.go:83] releasing machines lock for "stopped-upgrade-519106", held for 4.820653648s
	I1205 20:14:30.993529  199892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-519106
	I1205 20:14:31.012567  199892 ssh_runner.go:195] Run: cat /version.json
	I1205 20:14:31.012640  199892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-519106
	I1205 20:14:31.012655  199892 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:14:31.012707  199892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-519106
	I1205 20:14:31.034597  199892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/stopped-upgrade-519106/id_rsa Username:docker}
	I1205 20:14:31.034814  199892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/stopped-upgrade-519106/id_rsa Username:docker}
	W1205 20:14:31.148528  199892 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1205 20:14:31.148613  199892 ssh_runner.go:195] Run: systemctl --version
	I1205 20:14:31.152681  199892 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:14:31.205700  199892 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:14:31.209985  199892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:14:31.225487  199892 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 20:14:31.225591  199892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:14:31.249787  199892 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
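
The two find/mv runs above are minikube's CNI cleanup: any loopback, bridge, or podman config in /etc/cni/net.d is renamed with a .mk_disabled suffix so the container runtime only loads the CNI configuration minikube itself manages. A minimal Go sketch of the same renaming pass (the helper is hypothetical and walks the directory directly instead of shelling out to find):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableCNIConfigs appends ".mk_disabled" to matching config files in dir,
	// mirroring the `find ... -exec mv {} {}.mk_disabled` commands in the log.
	func disableCNIConfigs(dir string, substrings ...string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			for _, s := range substrings {
				if strings.Contains(name, s) {
					src := filepath.Join(dir, name)
					if err := os.Rename(src, src+".mk_disabled"); err != nil {
						return disabled, err
					}
					disabled = append(disabled, src)
					break
				}
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableCNIConfigs("/etc/cni/net.d", "loopback", "bridge", "podman")
		fmt.Println(disabled, err)
	}
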
	I1205 20:14:31.249809  199892 start.go:475] detecting cgroup driver to use...
	I1205 20:14:31.249837  199892 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1205 20:14:31.249878  199892 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:14:31.273410  199892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:14:31.283919  199892 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:14:31.284001  199892 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:14:31.293113  199892 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:14:31.301714  199892 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1205 20:14:31.310435  199892 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1205 20:14:31.310492  199892 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:14:31.382112  199892 docker.go:219] disabling docker service ...
	I1205 20:14:31.382179  199892 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:14:31.391568  199892 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:14:31.400331  199892 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:14:31.460827  199892 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:14:31.527766  199892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:14:31.536933  199892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:14:31.549058  199892 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 20:14:31.549123  199892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:14:31.558719  199892 out.go:177] 
	W1205 20:14:31.560330  199892 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1205 20:14:31.560350  199892 out.go:239] * 
	W1205 20:14:31.561209  199892 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:14:31.563267  199892 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-519106 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (69.58s)
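
The root cause is visible in the stderr above: the profile under test was created by minikube v1.9.0, and its machine image (Ubuntu 19.10, per the provisioning log) predates the /etc/crio/crio.conf.d/02-crio.conf drop-in, so the unguarded sed that rewrites pause_image exits with status 2 and start aborts with RUNTIME_ENABLE. A defensive variant would probe for the drop-in and fall back to the monolithic /etc/crio/crio.conf. A minimal sketch using a plain os/exec shell-out rather than minikube's ssh_runner plumbing (the function below is hypothetical, not the upstream fix):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// setPauseImage rewrites pause_image in the first cri-o config file that
	// actually exists, instead of assuming the 02-crio.conf drop-in is present.
	func setPauseImage(image string) error {
		candidates := []string{
			"/etc/crio/crio.conf.d/02-crio.conf", // newer kicbase images
			"/etc/crio/crio.conf",                // older (Ubuntu 19.10-era) images
		}
		for _, path := range candidates {
			if err := exec.Command("sudo", "test", "-f", path).Run(); err != nil {
				continue // file missing; try the next candidate
			}
			expr := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, image)
			return exec.Command("sudo", "sed", "-i", expr, path).Run()
		}
		return fmt.Errorf("no cri-o config found to set pause_image %q", image)
	}

	func main() {
		if err := setPauseImage("registry.k8s.io/pause:3.2"); err != nil {
			fmt.Println(err)
		}
	}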

                                                
                                    

Test pass (281/315)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 11.58
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.4/json-events 7.34
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.07
17 TestDownloadOnly/v1.29.0-rc.1/json-events 9.02
18 TestDownloadOnly/v1.29.0-rc.1/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.1/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.21
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
25 TestDownloadOnlyKic 1.31
26 TestBinaryMirror 0.74
27 TestOffline 56.23
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
32 TestAddons/Setup 131.77
34 TestAddons/parallel/Registry 15.93
37 TestAddons/parallel/MetricsServer 5.65
38 TestAddons/parallel/HelmTiller 8.95
40 TestAddons/parallel/CSI 80.3
41 TestAddons/parallel/Headlamp 12.38
42 TestAddons/parallel/CloudSpanner 5.49
43 TestAddons/parallel/LocalPath 51.94
44 TestAddons/parallel/NvidiaDevicePlugin 5.47
47 TestAddons/serial/GCPAuth/Namespaces 0.12
48 TestAddons/StoppedEnableDisable 12.18
49 TestCertOptions 33.76
50 TestCertExpiration 227.87
52 TestForceSystemdFlag 36.4
53 TestForceSystemdEnv 38.57
55 TestKVMDriverInstallOrUpdate 1.56
59 TestErrorSpam/setup 22.6
60 TestErrorSpam/start 0.62
61 TestErrorSpam/status 0.9
62 TestErrorSpam/pause 1.53
63 TestErrorSpam/unpause 1.51
64 TestErrorSpam/stop 1.44
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 69.92
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 32.96
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.84
76 TestFunctional/serial/CacheCmd/cache/add_local 0.75
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.13
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 39.44
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 1.35
87 TestFunctional/serial/LogsFileCmd 1.36
88 TestFunctional/serial/InvalidService 4.07
90 TestFunctional/parallel/ConfigCmd 0.51
91 TestFunctional/parallel/DashboardCmd 8.27
92 TestFunctional/parallel/DryRun 0.48
93 TestFunctional/parallel/InternationalLanguage 0.21
94 TestFunctional/parallel/StatusCmd 1.1
98 TestFunctional/parallel/ServiceCmdConnect 10.81
99 TestFunctional/parallel/AddonsCmd 0.23
100 TestFunctional/parallel/PersistentVolumeClaim 31.66
102 TestFunctional/parallel/SSHCmd 0.58
103 TestFunctional/parallel/CpCmd 1.48
104 TestFunctional/parallel/MySQL 21.98
105 TestFunctional/parallel/FileSync 0.35
106 TestFunctional/parallel/CertSync 2.41
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
114 TestFunctional/parallel/License 0.16
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 18.38
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
127 TestFunctional/parallel/ImageCommands/ImageBuild 4.06
128 TestFunctional/parallel/ImageCommands/Setup 1.02
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 7.16
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.53
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
138 TestFunctional/parallel/Version/short 0.08
139 TestFunctional/parallel/Version/components 0.56
140 TestFunctional/parallel/MountCmd/any-port 9.78
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.58
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.8
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.51
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.19
145 TestFunctional/parallel/ServiceCmd/DeployApp 6.15
146 TestFunctional/parallel/MountCmd/specific-port 1.68
147 TestFunctional/parallel/MountCmd/VerifyCleanup 0.99
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
149 TestFunctional/parallel/ProfileCmd/profile_list 0.42
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
151 TestFunctional/parallel/ServiceCmd/List 0.98
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.75
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.69
154 TestFunctional/parallel/ServiceCmd/Format 0.63
155 TestFunctional/parallel/ServiceCmd/URL 0.61
156 TestFunctional/delete_addon-resizer_images 0.07
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
162 TestIngressAddonLegacy/StartLegacyK8sCluster 64
164 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.3
165 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.55
169 TestJSONOutput/start/Command 70.58
170 TestJSONOutput/start/Audit 0
172 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/pause/Command 0.65
176 TestJSONOutput/pause/Audit 0
178 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/unpause/Command 0.6
182 TestJSONOutput/unpause/Audit 0
184 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/stop/Command 5.73
188 TestJSONOutput/stop/Audit 0
190 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
192 TestErrorJSONOutput 0.23
194 TestKicCustomNetwork/create_custom_network 33.89
195 TestKicCustomNetwork/use_default_bridge_network 24.44
196 TestKicExistingNetwork 25.2
197 TestKicCustomSubnet 27.12
198 TestKicStaticIP 27.61
199 TestMainNoArgs 0.06
200 TestMinikubeProfile 53.67
203 TestMountStart/serial/StartWithMountFirst 8.05
204 TestMountStart/serial/VerifyMountFirst 0.26
205 TestMountStart/serial/StartWithMountSecond 8.07
206 TestMountStart/serial/VerifyMountSecond 0.26
207 TestMountStart/serial/DeleteFirst 1.64
208 TestMountStart/serial/VerifyMountPostDelete 0.25
209 TestMountStart/serial/Stop 1.22
210 TestMountStart/serial/RestartStopped 6.87
211 TestMountStart/serial/VerifyMountPostStop 0.28
214 TestMultiNode/serial/FreshStart2Nodes 86.67
215 TestMultiNode/serial/DeployApp2Nodes 4.06
217 TestMultiNode/serial/AddNode 60.25
218 TestMultiNode/serial/MultiNodeLabels 0.06
219 TestMultiNode/serial/ProfileList 0.28
220 TestMultiNode/serial/CopyFile 9.4
221 TestMultiNode/serial/StopNode 2.15
222 TestMultiNode/serial/StartAfterStop 11.02
223 TestMultiNode/serial/RestartKeepsNodes 111.52
224 TestMultiNode/serial/DeleteNode 4.7
225 TestMultiNode/serial/StopMultiNode 23.85
226 TestMultiNode/serial/RestartMultiNode 75.05
227 TestMultiNode/serial/ValidateNameConflict 27.26
232 TestPreload 147.68
234 TestScheduledStopUnix 99.25
237 TestInsufficientStorage 10.55
240 TestKubernetesUpgrade 354.4
241 TestMissingContainerUpgrade 159.2
243 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
244 TestNoKubernetes/serial/StartWithK8s 36.55
245 TestNoKubernetes/serial/StartWithStopK8s 8.71
246 TestNoKubernetes/serial/Start 10.24
247 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
248 TestNoKubernetes/serial/ProfileList 1.27
252 TestNoKubernetes/serial/Stop 1.44
253 TestNoKubernetes/serial/StartNoArgs 7.8
258 TestNetworkPlugins/group/false 4.23
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
270 TestStoppedBinaryUpgrade/Setup 0.75
273 TestPause/serial/Start 42.96
274 TestPause/serial/SecondStartNoReconfiguration 61.83
275 TestStoppedBinaryUpgrade/MinikubeLogs 0.52
276 TestNetworkPlugins/group/auto/Start 70.12
277 TestPause/serial/Pause 0.73
278 TestPause/serial/VerifyStatus 0.3
279 TestPause/serial/Unpause 0.62
280 TestPause/serial/PauseAgain 0.76
281 TestPause/serial/DeletePaused 2.6
282 TestPause/serial/VerifyDeletedResources 0.64
283 TestNetworkPlugins/group/kindnet/Start 71.09
284 TestNetworkPlugins/group/auto/KubeletFlags 0.29
285 TestNetworkPlugins/group/auto/NetCatPod 9.3
286 TestNetworkPlugins/group/auto/DNS 0.17
287 TestNetworkPlugins/group/auto/Localhost 0.17
288 TestNetworkPlugins/group/auto/HairPin 0.16
289 TestNetworkPlugins/group/calico/Start 64.23
290 TestNetworkPlugins/group/custom-flannel/Start 54.62
291 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
292 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
293 TestNetworkPlugins/group/kindnet/NetCatPod 10.31
294 TestNetworkPlugins/group/kindnet/DNS 0.16
295 TestNetworkPlugins/group/kindnet/Localhost 0.16
296 TestNetworkPlugins/group/kindnet/HairPin 0.16
297 TestNetworkPlugins/group/calico/ControllerPod 5.02
298 TestNetworkPlugins/group/enable-default-cni/Start 70.18
299 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.45
300 TestNetworkPlugins/group/calico/KubeletFlags 0.36
301 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.64
302 TestNetworkPlugins/group/calico/NetCatPod 12.59
303 TestNetworkPlugins/group/custom-flannel/DNS 0.16
304 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
305 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
306 TestNetworkPlugins/group/calico/DNS 0.18
307 TestNetworkPlugins/group/calico/Localhost 0.16
308 TestNetworkPlugins/group/calico/HairPin 0.15
309 TestNetworkPlugins/group/flannel/Start 61.8
310 TestNetworkPlugins/group/bridge/Start 80.44
311 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
312 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.25
313 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
314 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
315 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
317 TestStartStop/group/old-k8s-version/serial/FirstStart 126.89
318 TestNetworkPlugins/group/flannel/ControllerPod 5.02
320 TestStartStop/group/no-preload/serial/FirstStart 65.05
321 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
322 TestNetworkPlugins/group/flannel/NetCatPod 10.4
323 TestNetworkPlugins/group/flannel/DNS 0.18
324 TestNetworkPlugins/group/flannel/Localhost 0.16
325 TestNetworkPlugins/group/flannel/HairPin 0.14
326 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
327 TestNetworkPlugins/group/bridge/NetCatPod 9.31
328 TestNetworkPlugins/group/bridge/DNS 0.22
329 TestNetworkPlugins/group/bridge/Localhost 0.17
330 TestNetworkPlugins/group/bridge/HairPin 0.16
332 TestStartStop/group/embed-certs/serial/FirstStart 71.41
334 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.85
335 TestStartStop/group/no-preload/serial/DeployApp 8.83
336 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
337 TestStartStop/group/no-preload/serial/Stop 11.94
338 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
339 TestStartStop/group/no-preload/serial/SecondStart 343.35
340 TestStartStop/group/embed-certs/serial/DeployApp 8.35
341 TestStartStop/group/old-k8s-version/serial/DeployApp 9.47
342 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
343 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.39
344 TestStartStop/group/embed-certs/serial/Stop 11.99
345 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.79
346 TestStartStop/group/old-k8s-version/serial/Stop 11.96
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
348 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.31
349 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
350 TestStartStop/group/embed-certs/serial/SecondStart 341.42
351 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
352 TestStartStop/group/old-k8s-version/serial/SecondStart 438.99
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
354 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 336.76
355 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7.02
356 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
357 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
358 TestStartStop/group/no-preload/serial/Pause 2.99
360 TestStartStop/group/newest-cni/serial/FirstStart 37.61
361 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.03
362 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.02
363 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
364 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
365 TestStartStop/group/newest-cni/serial/DeployApp 0
366 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.97
367 TestStartStop/group/newest-cni/serial/Stop 3.03
368 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
369 TestStartStop/group/embed-certs/serial/Pause 3.82
370 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
371 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.3
372 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.62
373 TestStartStop/group/newest-cni/serial/SecondStart 26.76
374 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
375 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
376 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
377 TestStartStop/group/newest-cni/serial/Pause 2.66
378 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
379 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
380 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
381 TestStartStop/group/old-k8s-version/serial/Pause 2.68
TestDownloadOnly/v1.16.0/json-events (11.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-428164 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-428164 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.584101554s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (11.58s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-428164
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-428164: exit status 85 (75.777005ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-428164 | jenkins | v1.32.0 | 05 Dec 23 19:34 UTC |          |
	|         | -p download-only-428164        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:34:44
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:34:44.957959   12895 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:34:44.958123   12895 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:34:44.958132   12895 out.go:309] Setting ErrFile to fd 2...
	I1205 19:34:44.958136   12895 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:34:44.958312   12895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
	W1205 19:34:44.958415   12895 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17731-6088/.minikube/config/config.json: open /home/jenkins/minikube-integration/17731-6088/.minikube/config/config.json: no such file or directory
	I1205 19:34:44.958978   12895 out.go:303] Setting JSON to true
	I1205 19:34:44.959836   12895 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1037,"bootTime":1701803848,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:34:44.959892   12895 start.go:138] virtualization: kvm guest
	I1205 19:34:44.962839   12895 out.go:97] [download-only-428164] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:34:44.964372   12895 out.go:169] MINIKUBE_LOCATION=17731
	W1205 19:34:44.962961   12895 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 19:34:44.963011   12895 notify.go:220] Checking for updates...
	I1205 19:34:44.967663   12895 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:34:44.969159   12895 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 19:34:44.970695   12895 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	I1205 19:34:44.972309   12895 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 19:34:44.975333   12895 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 19:34:44.975618   12895 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:34:44.997449   12895 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:34:44.997566   12895 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:34:45.334835   12895 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-05 19:34:45.326751961 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:34:45.334939   12895 docker.go:295] overlay module found
	I1205 19:34:45.336995   12895 out.go:97] Using the docker driver based on user configuration
	I1205 19:34:45.337019   12895 start.go:298] selected driver: docker
	I1205 19:34:45.337023   12895 start.go:902] validating driver "docker" against <nil>
	I1205 19:34:45.337115   12895 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:34:45.388483   12895 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-05 19:34:45.380398428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:34:45.388636   12895 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 19:34:45.389123   12895 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1205 19:34:45.389294   12895 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 19:34:45.391344   12895 out.go:169] Using Docker driver with root privileges
	I1205 19:34:45.392915   12895 cni.go:84] Creating CNI manager for ""
	I1205 19:34:45.392938   12895 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:34:45.392950   12895 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:34:45.392969   12895 start_flags.go:323] config:
	{Name:download-only-428164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-428164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:34:45.394695   12895 out.go:97] Starting control plane node download-only-428164 in cluster download-only-428164
	I1205 19:34:45.394718   12895 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:34:45.396268   12895 out.go:97] Pulling base image ...
	I1205 19:34:45.396298   12895 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1205 19:34:45.396395   12895 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 19:34:45.411109   12895 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1205 19:34:45.411271   12895 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory
	I1205 19:34:45.411379   12895 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1205 19:34:45.420018   12895 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1205 19:34:45.420045   12895 cache.go:56] Caching tarball of preloaded images
	I1205 19:34:45.420154   12895 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1205 19:34:45.422439   12895 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1205 19:34:45.422461   12895 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:34:45.446248   12895 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1205 19:34:49.211281   12895 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:34:49.211373   12895 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:34:50.131338   12895 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1205 19:34:50.131699   12895 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/download-only-428164/config.json ...
	I1205 19:34:50.131739   12895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/download-only-428164/config.json: {Name:mk63df8fe6aa741406349fcedc71b26734a94c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:34:50.131928   12895 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1205 19:34:50.132122   12895 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-428164"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
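
Note on the download step above: the preload tarball URL carries a ?checksum=md5:... query, and the log then shows that checksum being saved and verified (preload.go:238-256). Below is a minimal Go sketch of that download-then-verify pattern, standard library only; the helper name and control flow are illustrative, not minikube's actual download.go.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadAndVerifyMD5 streams url into dst while hashing, then compares
	// the MD5 against wantHex. Illustrative only; minikube's download.go
	// drives this through its download library via the checksum query.
	func downloadAndVerifyMD5(url, dst, wantHex string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		h := md5.New()
		// MultiWriter feeds the hash as bytes are written, so the tarball
		// is read exactly once.
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		// URL and md5 value taken verbatim from the log lines above;
		// the local filename is arbitrary.
		if err := downloadAndVerifyMD5(
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4",
			"preload.tar.lz4",
			"432b600409d778ea7a21214e83948570",
		); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

Hashing through io.MultiWriter while writing keeps the multi-hundred-megabyte tarball single-pass instead of re-reading it after download.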

x
+
TestDownloadOnly/v1.28.4/json-events (7.34s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-428164 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-428164 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.336985008s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (7.34s)

x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-428164
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-428164: exit status 85 (72.452797ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-428164 | jenkins | v1.32.0 | 05 Dec 23 19:34 UTC |          |
	|         | -p download-only-428164        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-428164 | jenkins | v1.32.0 | 05 Dec 23 19:34 UTC |          |
	|         | -p download-only-428164        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:34:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:34:56.621216   13062 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:34:56.621485   13062 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:34:56.621498   13062 out.go:309] Setting ErrFile to fd 2...
	I1205 19:34:56.621506   13062 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:34:56.621705   13062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
	W1205 19:34:56.621833   13062 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17731-6088/.minikube/config/config.json: open /home/jenkins/minikube-integration/17731-6088/.minikube/config/config.json: no such file or directory
	I1205 19:34:56.622233   13062 out.go:303] Setting JSON to true
	I1205 19:34:56.623097   13062 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1049,"bootTime":1701803848,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:34:56.623185   13062 start.go:138] virtualization: kvm guest
	I1205 19:34:56.625527   13062 out.go:97] [download-only-428164] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:34:56.627311   13062 out.go:169] MINIKUBE_LOCATION=17731
	I1205 19:34:56.625673   13062 notify.go:220] Checking for updates...
	I1205 19:34:56.630279   13062 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:34:56.631820   13062 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 19:34:56.633207   13062 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	I1205 19:34:56.634527   13062 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 19:34:56.636895   13062 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 19:34:56.637337   13062 config.go:182] Loaded profile config "download-only-428164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1205 19:34:56.637375   13062 start.go:810] api.Load failed for download-only-428164: filestore "download-only-428164": Docker machine "download-only-428164" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1205 19:34:56.637445   13062 driver.go:392] Setting default libvirt URI to qemu:///system
	W1205 19:34:56.637470   13062 start.go:810] api.Load failed for download-only-428164: filestore "download-only-428164": Docker machine "download-only-428164" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1205 19:34:56.657227   13062 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:34:56.657318   13062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:34:56.707894   13062 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-12-05 19:34:56.699811971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:34:56.707995   13062 docker.go:295] overlay module found
	I1205 19:34:56.710052   13062 out.go:97] Using the docker driver based on existing profile
	I1205 19:34:56.710073   13062 start.go:298] selected driver: docker
	I1205 19:34:56.710078   13062 start.go:902] validating driver "docker" against &{Name:download-only-428164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-428164 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:34:56.710202   13062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:34:56.763738   13062 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-12-05 19:34:56.755825379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:34:56.764449   13062 cni.go:84] Creating CNI manager for ""
	I1205 19:34:56.764473   13062 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:34:56.764488   13062 start_flags.go:323] config:
	{Name:download-only-428164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-428164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPU
s:}
	I1205 19:34:56.766819   13062 out.go:97] Starting control plane node download-only-428164 in cluster download-only-428164
	I1205 19:34:56.766844   13062 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:34:56.768543   13062 out.go:97] Pulling base image ...
	I1205 19:34:56.768599   13062 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:34:56.768628   13062 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 19:34:56.783805   13062 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1205 19:34:56.783933   13062 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory
	I1205 19:34:56.783949   13062 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory, skipping pull
	I1205 19:34:56.783953   13062 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in cache, skipping pull
	I1205 19:34:56.783960   13062 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f as a tarball
	I1205 19:34:56.789124   13062 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 19:34:56.789151   13062 cache.go:56] Caching tarball of preloaded images
	I1205 19:34:56.789283   13062 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:34:56.792062   13062 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1205 19:34:56.792093   13062 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:34:56.821202   13062 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-428164"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)
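
In this second run the kic base image check flips from "Writing ... to local cache" (first run) to "exists in cache, skipping pull" (image.go:63-105), which is the point of re-running download-only against an existing profile. A hypothetical Go sketch of that check-before-fetch shape, with fetch standing in for the real puller:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// cachedOrFetch returns the cache path for name, invoking fetch only on
	// a cache miss. Hypothetical helper mirroring the "exists in cache,
	// skipping pull" branch in the log above.
	func cachedOrFetch(cacheDir, name string, fetch func(dst string) error) (string, error) {
		dst := filepath.Join(cacheDir, name)
		if _, err := os.Stat(dst); err == nil {
			return dst, nil // cache hit: skip the pull entirely
		} else if !os.IsNotExist(err) {
			return "", err // unexpected stat error: surface it, don't re-download
		}
		if err := os.MkdirAll(cacheDir, 0o755); err != nil {
			return "", err
		}
		if err := fetch(dst); err != nil {
			return "", err
		}
		return dst, nil
	}

	func main() {
		path, err := cachedOrFetch(".minikube/cache/kic", "kicbase.tar",
			func(dst string) error { return os.WriteFile(dst, []byte("stub"), 0o644) })
		fmt.Println(path, err)
	}

Existence alone is a weak cache key; note that the image ref in the log pins a sha256 digest, which is what makes skipping the pull safe.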

x
+
TestDownloadOnly/v1.29.0-rc.1/json-events (9.02s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-428164 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-428164 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.014868338s)
--- PASS: TestDownloadOnly/v1.29.0-rc.1/json-events (9.02s)

x
+
TestDownloadOnly/v1.29.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.1/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-428164
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-428164: exit status 85 (74.616491ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-428164 | jenkins | v1.32.0 | 05 Dec 23 19:34 UTC |          |
	|         | -p download-only-428164           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-428164 | jenkins | v1.32.0 | 05 Dec 23 19:34 UTC |          |
	|         | -p download-only-428164           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-428164 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |          |
	|         | -p download-only-428164           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:35:04
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:35:04.031563   13206 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:35:04.031717   13206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:04.031727   13206 out.go:309] Setting ErrFile to fd 2...
	I1205 19:35:04.031732   13206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:04.031983   13206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
	W1205 19:35:04.032125   13206 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17731-6088/.minikube/config/config.json: open /home/jenkins/minikube-integration/17731-6088/.minikube/config/config.json: no such file or directory
	I1205 19:35:04.032634   13206 out.go:303] Setting JSON to true
	I1205 19:35:04.033437   13206 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1056,"bootTime":1701803848,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:35:04.033495   13206 start.go:138] virtualization: kvm guest
	I1205 19:35:04.035902   13206 out.go:97] [download-only-428164] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:35:04.037561   13206 out.go:169] MINIKUBE_LOCATION=17731
	I1205 19:35:04.036095   13206 notify.go:220] Checking for updates...
	I1205 19:35:04.040552   13206 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:35:04.042011   13206 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 19:35:04.043558   13206 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	I1205 19:35:04.045170   13206 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 19:35:04.047790   13206 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 19:35:04.048300   13206 config.go:182] Loaded profile config "download-only-428164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1205 19:35:04.048356   13206 start.go:810] api.Load failed for download-only-428164: filestore "download-only-428164": Docker machine "download-only-428164" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1205 19:35:04.048451   13206 driver.go:392] Setting default libvirt URI to qemu:///system
	W1205 19:35:04.048497   13206 start.go:810] api.Load failed for download-only-428164: filestore "download-only-428164": Docker machine "download-only-428164" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1205 19:35:04.070109   13206 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:35:04.070214   13206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:35:04.120530   13206 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-12-05 19:35:04.112107446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:35:04.120621   13206 docker.go:295] overlay module found
	I1205 19:35:04.122628   13206 out.go:97] Using the docker driver based on existing profile
	I1205 19:35:04.122649   13206 start.go:298] selected driver: docker
	I1205 19:35:04.122657   13206 start.go:902] validating driver "docker" against &{Name:download-only-428164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-428164 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:04.122799   13206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:35:04.171271   13206 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-12-05 19:35:04.163444057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:35:04.171922   13206 cni.go:84] Creating CNI manager for ""
	I1205 19:35:04.171940   13206 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:35:04.171960   13206 start_flags.go:323] config:
	{Name:download-only-428164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:download-only-428164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0
s GPUs:}
	I1205 19:35:04.174092   13206 out.go:97] Starting control plane node download-only-428164 in cluster download-only-428164
	I1205 19:35:04.174108   13206 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:35:04.175505   13206 out.go:97] Pulling base image ...
	I1205 19:35:04.175524   13206 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1205 19:35:04.175589   13206 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 19:35:04.190831   13206 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1205 19:35:04.190976   13206 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory
	I1205 19:35:04.190995   13206 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory, skipping pull
	I1205 19:35:04.191002   13206 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in cache, skipping pull
	I1205 19:35:04.191015   13206 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f as a tarball
	I1205 19:35:04.196125   13206 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1205 19:35:04.196142   13206 cache.go:56] Caching tarball of preloaded images
	I1205 19:35:04.196323   13206 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1205 19:35:04.198558   13206 out.go:97] Downloading Kubernetes v1.29.0-rc.1 preload ...
	I1205 19:35:04.198581   13206 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:35:04.225743   13206 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:26a42be529125e55182ed93a618b213b -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1205 19:35:07.937877   13206 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:35:07.937966   13206 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17731-6088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:35:08.754329   13206 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on crio
	I1205 19:35:08.754446   13206 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/download-only-428164/config.json ...
	I1205 19:35:08.754652   13206 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1205 19:35:08.754823   13206 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17731-6088/.minikube/cache/linux/amd64/v1.29.0-rc.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-428164"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.08s)
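
One detail worth noting: the kubectl download above uses checksum=file:https://dl.k8s.io/release/v1.29.0-rc.1/bin/linux/amd64/kubectl.sha256, i.e. the expected digest is itself fetched from a sidecar file rather than inlined (the v1.16.0 run earlier used the older .sha1 variant). A stdlib-only sketch of resolving such a sidecar checksum; the helper name is hypothetical:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	// fetchSidecarChecksum downloads a *.sha256/*.sha1 sidecar file and
	// returns the hex digest. Such files typically hold either the bare
	// digest or "DIGEST  filename", so take the first whitespace-separated
	// field.
	func fetchSidecarChecksum(url string) (string, error) {
		resp, err := http.Get(url)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return "", fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		body, err := io.ReadAll(io.LimitReader(resp.Body, 4096))
		if err != nil {
			return "", err
		}
		fields := strings.Fields(string(body))
		if len(fields) == 0 {
			return "", fmt.Errorf("empty checksum file at %s", url)
		}
		return fields[0], nil
	}

	func main() {
		sum, err := fetchSidecarChecksum("https://dl.k8s.io/release/v1.29.0-rc.1/bin/linux/amd64/kubectl.sha256")
		fmt.Println(sum, err)
	}

The resolved digest would then feed the same verify-while-downloading step sketched after the v1.16.0 run.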

x
+
TestDownloadOnly/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.21s)

x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-428164
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

x
+
TestDownloadOnlyKic (1.31s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-383682 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-383682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-383682
--- PASS: TestDownloadOnlyKic (1.31s)

x
+
TestBinaryMirror (0.74s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-319231 --alsologtostderr --binary-mirror http://127.0.0.1:32971 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-319231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-319231
--- PASS: TestBinaryMirror (0.74s)
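
TestBinaryMirror passes --binary-mirror http://127.0.0.1:32971, i.e. a local HTTP server standing in for dl.k8s.io. Anything that serves files under the upstream path layout works; below is a minimal Go stand-in, where the on-disk layout shown is an assumption for illustration, not what the test harness actually serves:

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve ./mirror, which would hold binaries under upstream-shaped
		// paths, e.g. ./mirror/release/v1.28.4/bin/linux/amd64/kubectl.
		// The port matches the --binary-mirror flag in the test above.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:32971", nil))
	}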

x
+
TestOffline (56.23s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-973282 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-973282 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (53.306296604s)
helpers_test.go:175: Cleaning up "offline-crio-973282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-973282
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-973282: (2.924610403s)
--- PASS: TestOffline (56.23s)

x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-030936
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-030936: exit status 85 (65.060046ms)

-- stdout --
	* Profile "addons-030936" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-030936"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-030936
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-030936: exit status 85 (66.887608ms)

-- stdout --
	* Profile "addons-030936" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-030936"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (131.77s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-030936 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-030936 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m11.772633962s)
--- PASS: TestAddons/Setup (131.77s)

TestAddons/parallel/Registry (15.93s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 13.394222ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-hmgc4" [4f36e16b-74e5-4183-ae54-777afcc87dc9] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01623105s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9wsfw" [23d952c3-eba0-4788-b241-d477ed5081a1] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011790852s
addons_test.go:339: (dbg) Run:  kubectl --context addons-030936 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-030936 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-030936 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.720145902s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-030936 ip
2023/12/05 19:37:42 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-030936 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.93s)
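
The "waiting ... for pods matching" lines above come from a label-selector poll. A hedged sketch of that loop (the real logic lives in helpers_test.go; the 2-second cadence here is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls kubectl until a pod matching the selector reports
// phase Running, or the deadline passes.
func waitForRunning(kubeContext, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pods",
			"-n", namespace, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("timed out waiting for %q", selector)
}

func main() {
	err := waitForRunning("addons-030936", "kube-system", "actual-registry=true", 6*time.Minute)
	fmt.Println("wait result:", err)
}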

TestAddons/parallel/MetricsServer (5.65s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 12.901931ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-8586h" [22718867-f984-4ef4-846c-45896c7a82bf] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.016129954s
addons_test.go:414: (dbg) Run:  kubectl --context addons-030936 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-030936 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.65s)

TestAddons/parallel/HelmTiller (8.95s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.222202ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-cdtmt" [9203256c-9bc5-49b8-8ef1-47ca632955a8] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.011795598s
addons_test.go:472: (dbg) Run:  kubectl --context addons-030936 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-030936 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.418718161s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-030936 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (8.95s)

TestAddons/parallel/CSI (80.3s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 14.400525ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-030936 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-030936 get pvc hpvc -o jsonpath={.status.phase} -n default
[identical "get pvc hpvc" poll repeated 40 times in total]
addons_test.go:573: (dbg) Run:  kubectl --context addons-030936 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7bc8c519-c3e3-4ca7-b030-125b7b2b414b] Pending
helpers_test.go:344: "task-pv-pod" [7bc8c519-c3e3-4ca7-b030-125b7b2b414b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7bc8c519-c3e3-4ca7-b030-125b7b2b414b] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.009152265s
addons_test.go:583: (dbg) Run:  kubectl --context addons-030936 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-030936 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-030936 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-030936 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-030936 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-030936 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-030936 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-030936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
[identical "get pvc hpvc-restore" poll repeated 13 times in total]
addons_test.go:615: (dbg) Run:  kubectl --context addons-030936 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c0e1dcfa-36c6-4241-8ba0-f30992b1cad3] Pending
helpers_test.go:344: "task-pv-pod-restore" [c0e1dcfa-36c6-4241-8ba0-f30992b1cad3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c0e1dcfa-36c6-4241-8ba0-f30992b1cad3] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.008977894s
addons_test.go:625: (dbg) Run:  kubectl --context addons-030936 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-030936 delete pod task-pv-pod-restore: (1.101555142s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-030936 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-030936 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-030936 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-030936 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.528891812s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-030936 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (80.30s)
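
The PVC phase polls collapsed above and the volume-snapshot readiness checks are the same primitive: re-run a kubectl jsonpath query until it returns the wanted string. A sketch of that primitive (illustrative, not the test's own helper; the 2s/180-iteration budget is an assumption matching the 6m wait):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollJSONPath repeats a kubectl query until its output equals want.
func pollJSONPath(kubeContext, want string, args ...string) bool {
	for i := 0; i < 180; i++ {
		full := append([]string{"--context", kubeContext}, args...)
		out, err := exec.Command("kubectl", full...).Output()
		if err == nil && string(out) == want {
			return true
		}
		time.Sleep(2 * time.Second)
	}
	return false
}

func main() {
	bound := pollJSONPath("addons-030936", "Bound",
		"get", "pvc", "hpvc", "-o", "jsonpath={.status.phase}", "-n", "default")
	fmt.Println("pvc hpvc bound:", bound)
}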

TestAddons/parallel/Headlamp (12.38s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-030936 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-030936 --alsologtostderr -v=1: (1.367020493s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-gcvsv" [913b228e-e435-46a8-9a65-3a846d997726] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-gcvsv" [913b228e-e435-46a8-9a65-3a846d997726] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.010743862s
--- PASS: TestAddons/parallel/Headlamp (12.38s)

TestAddons/parallel/CloudSpanner (5.49s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-6zz5n" [019f8b98-a983-4d91-a71f-2cbcb3f87229] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008125303s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-030936
--- PASS: TestAddons/parallel/CloudSpanner (5.49s)

TestAddons/parallel/LocalPath (51.94s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-030936 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-030936 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-030936 get pvc test-pvc -o jsonpath={.status.phase} -n default
[identical "get pvc test-pvc" poll repeated 5 times in total]
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1848cae1-a965-4094-8019-f81b6b8985e7] Pending
helpers_test.go:344: "test-local-path" [1848cae1-a965-4094-8019-f81b6b8985e7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1848cae1-a965-4094-8019-f81b6b8985e7] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1848cae1-a965-4094-8019-f81b6b8985e7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.008187944s
addons_test.go:890: (dbg) Run:  kubectl --context addons-030936 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-030936 ssh "cat /opt/local-path-provisioner/pvc-c0670ccc-a245-46b9-8552-084bf6aa50cf_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-030936 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-030936 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-030936 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-030936 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.243574988s)
--- PASS: TestAddons/parallel/LocalPath (51.94s)

TestAddons/parallel/NvidiaDevicePlugin (5.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-wnvvv" [78a4b26e-4608-4170-8a6a-de17b217468b] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.01218245s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-030936
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-030936 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-030936 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (12.18s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-030936
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-030936: (11.901149667s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-030936
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-030936
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-030936
--- PASS: TestAddons/StoppedEnableDisable (12.18s)

TestCertOptions (33.76s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-712876 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-712876 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (26.917329888s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-712876 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-712876 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-712876 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-712876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-712876
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-712876: (6.126539843s)
--- PASS: TestCertOptions (33.76s)

TestCertExpiration (227.87s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-126151 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-126151 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.300180582s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-126151 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-126151 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.204022632s)
helpers_test.go:175: Cleaning up "cert-expiration-126151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-126151
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-126151: (2.364090278s)
--- PASS: TestCertExpiration (227.87s)
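
The run above first issues certificates valid for only 3m, then restarts with --cert-expiration=8760h. A hedged way to verify the rotation from outside is to parse the apiserver certificate's NotAfter field (the path below is the one inspected by TestCertOptions; adjust for your environment):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// After the second start this should report roughly one year out.
	fmt.Printf("apiserver cert expires %s (in %s)\n",
		cert.NotAfter, time.Until(cert.NotAfter).Round(time.Minute))
}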

TestForceSystemdFlag (36.4s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-071470 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1205 20:12:27.449760   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-071470 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.570380358s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-071470 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-071470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-071470
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-071470: (3.489915203s)
--- PASS: TestForceSystemdFlag (36.40s)
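
The test reads the CRI-O drop-in to confirm the systemd cgroup manager was forced on. A spot-check sketch; the cgroup_manager key name follows CRI-O's crio.conf format and is an assumption here, not the test's own assertion:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-071470",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) { // assumed key name
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager setting not found")
	}
}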

TestForceSystemdEnv (38.57s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-327830 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-327830 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.952610479s)
helpers_test.go:175: Cleaning up "force-systemd-env-327830" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-327830
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-327830: (2.621840779s)
--- PASS: TestForceSystemdEnv (38.57s)

TestKVMDriverInstallOrUpdate (1.56s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.56s)

TestErrorSpam/setup (22.6s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-039372 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-039372 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-039372 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-039372 --driver=docker  --container-runtime=crio: (22.602858998s)
--- PASS: TestErrorSpam/setup (22.60s)

TestErrorSpam/start (0.62s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039372 --log_dir /tmp/nospam-039372 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039372 --log_dir /tmp/nospam-039372 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039372 --log_dir /tmp/nospam-039372 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

TestErrorSpam/status (0.9s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039372 --log_dir /tmp/nospam-039372 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039372 --log_dir /tmp/nospam-039372 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039372 --log_dir /tmp/nospam-039372 status
--- PASS: TestErrorSpam/status (0.90s)

TestErrorSpam/pause (1.53s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039372 --log_dir /tmp/nospam-039372 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039372 --log_dir /tmp/nospam-039372 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039372 --log_dir /tmp/nospam-039372 pause
--- PASS: TestErrorSpam/pause (1.53s)

TestErrorSpam/unpause (1.51s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039372 --log_dir /tmp/nospam-039372 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039372 --log_dir /tmp/nospam-039372 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039372 --log_dir /tmp/nospam-039372 unpause
--- PASS: TestErrorSpam/unpause (1.51s)

TestErrorSpam/stop (1.44s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039372 --log_dir /tmp/nospam-039372 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-039372 --log_dir /tmp/nospam-039372 stop: (1.227511383s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039372 --log_dir /tmp/nospam-039372 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039372 --log_dir /tmp/nospam-039372 stop
--- PASS: TestErrorSpam/stop (1.44s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17731-6088/.minikube/files/etc/test/nested/copy/12883/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (69.92s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-481133 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1205 19:47:27.449621   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
E1205 19:47:27.455419   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
E1205 19:47:27.465696   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
E1205 19:47:27.485981   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
E1205 19:47:27.526256   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
E1205 19:47:27.606563   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
E1205 19:47:27.766900   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
E1205 19:47:28.087439   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
E1205 19:47:28.728312   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
E1205 19:47:30.008709   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
E1205 19:47:32.569455   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
E1205 19:47:37.689683   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
E1205 19:47:47.930891   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-481133 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m9.918667002s)
--- PASS: TestFunctional/serial/StartWithProxy (69.92s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.96s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-481133 --alsologtostderr -v=8
E1205 19:48:08.411157   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-481133 --alsologtostderr -v=8: (32.958364525s)
functional_test.go:659: soft start took 32.959243479s for "functional-481133" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.96s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-481133 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.84s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-481133 cache add registry.k8s.io/pause:3.3: (1.061794993s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.84s)

TestFunctional/serial/CacheCmd/cache/add_local (0.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-481133 /tmp/TestFunctionalserialCacheCmdcacheadd_local3568295886/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 cache add minikube-local-cache-test:functional-481133
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 cache delete minikube-local-cache-test:functional-481133
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-481133
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.75s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481133 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.41281ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)
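
The reload check follows a remove → verify-missing → reload → verify-present sequence. A sketch of the same flow driven through the CLIs used above (paths and profile name taken from this run; the helper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v -> err=%v\n%s", name, args, err, out)
	return err
}

func main() {
	mk := "out/minikube-linux-amd64"
	run(mk, "-p", "functional-481133", "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	if run(mk, "-p", "functional-481133", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
		fmt.Println("unexpected: image still present after rmi")
	}
	run(mk, "-p", "functional-481133", "cache", "reload")
	// inspecti should now succeed: the cached image was pushed back into CRI-O.
	run(mk, "-p", "functional-481133", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest")
}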

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 kubectl -- --context functional-481133 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-481133 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (39.44s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-481133 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1205 19:48:49.372316   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-481133 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.443646125s)
functional_test.go:757: restart took 39.443773687s for "functional-481133" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.44s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-481133 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
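
The phase/status lines above come from inspecting the control-plane pods as JSON. A sketch of that check (struct fields mirror core/v1 Pod status; this is an illustration, not the test's own code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-481133",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println(err)
		return
	}
	for _, p := range pods.Items {
		status := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				status = "Ready"
			}
		}
		// Control-plane pods carry a "component" label (etcd, kube-apiserver, ...).
		fmt.Printf("%s phase: %s\n%s status: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase,
			p.Metadata.Labels["component"], status)
	}
}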

TestFunctional/serial/LogsCmd (1.35s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-481133 logs: (1.348019886s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

TestFunctional/serial/LogsFileCmd (1.36s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 logs --file /tmp/TestFunctionalserialLogsFileCmd4125139670/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-481133 logs --file /tmp/TestFunctionalserialLogsFileCmd4125139670/001/logs.txt: (1.36031468s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

TestFunctional/serial/InvalidService (4.07s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-481133 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-481133
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-481133: exit status 115 (332.364212ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30573 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-481133 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.07s)

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481133 config get cpus: exit status 14 (87.764124ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481133 config get cpus: exit status 14 (88.722234ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
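
"config get" on an unset key exits with status 14 rather than printing a value, which is why the two Non-zero exits above are expected. A sketch of reading that code from Go via exec.ExitError:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-481133", "config", "get", "cpus")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit status:", exitErr.ExitCode()) // 14 when the key is unset
	} else if err == nil {
		fmt.Println("key is set; command succeeded")
	} else {
		fmt.Println(err)
	}
}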

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (8.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-481133 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-481133 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 53099: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.27s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-481133 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-481133 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (179.984218ms)

                                                
                                                
-- stdout --
	* [functional-481133] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:50:02.463948   52560 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:50:02.464078   52560 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:50:02.464086   52560 out.go:309] Setting ErrFile to fd 2...
	I1205 19:50:02.464090   52560 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:50:02.464320   52560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
	I1205 19:50:02.464840   52560 out.go:303] Setting JSON to false
	I1205 19:50:02.466092   52560 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1954,"bootTime":1701803848,"procs":506,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:50:02.466156   52560 start.go:138] virtualization: kvm guest
	I1205 19:50:02.468673   52560 out.go:177] * [functional-481133] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:50:02.471364   52560 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 19:50:02.472807   52560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:50:02.471420   52560 notify.go:220] Checking for updates...
	I1205 19:50:02.475596   52560 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 19:50:02.477075   52560 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	I1205 19:50:02.478478   52560 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:50:02.479957   52560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:50:02.481889   52560 config.go:182] Loaded profile config "functional-481133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:50:02.482640   52560 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:50:02.509687   52560 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:50:02.509838   52560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:50:02.565432   52560 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-12-05 19:50:02.556165794 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:50:02.565522   52560 docker.go:295] overlay module found
	I1205 19:50:02.568290   52560 out.go:177] * Using the docker driver based on existing profile
	I1205 19:50:02.569708   52560 start.go:298] selected driver: docker
	I1205 19:50:02.569728   52560 start.go:902] validating driver "docker" against &{Name:functional-481133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-481133 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:50:02.569840   52560 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:50:02.572241   52560 out.go:177] 
	W1205 19:50:02.574012   52560 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 19:50:02.575433   52560 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-481133 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.48s)
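The dry-run path validates the requested resources against the existing profile without touching the cluster; undersized memory is rejected with exit status 23 and an RSRC_INSUFFICIENT_REQ_MEMORY reason on stderr, as the trace above shows. A minimal sketch of the same check, assuming the binary and profile from the log:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-481133",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	err := cmd.Run()
	// A 250MB request is below the 1800MB usable minimum, so the dry run
	// should exit 23 and name the insufficient-memory reason.
	exitErr, ok := err.(*exec.ExitError)
	if !ok || exitErr.ExitCode() != 23 {
		fmt.Println("expected exit status 23, got:", err)
		return
	}
	if !strings.Contains(stderr.String(), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("missing insufficient-memory reason in stderr")
	}
}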

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-481133 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-481133 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (214.530469ms)

                                                
                                                
-- stdout --
	* [functional-481133] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:50:01.545228   51910 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:50:01.545379   51910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:50:01.545391   51910 out.go:309] Setting ErrFile to fd 2...
	I1205 19:50:01.545398   51910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:50:01.545826   51910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
	I1205 19:50:01.546389   51910 out.go:303] Setting JSON to false
	I1205 19:50:01.547506   51910 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1954,"bootTime":1701803848,"procs":499,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:50:01.547569   51910 start.go:138] virtualization: kvm guest
	I1205 19:50:01.549760   51910 out.go:177] * [functional-481133] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1205 19:50:01.551298   51910 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 19:50:01.551331   51910 notify.go:220] Checking for updates...
	I1205 19:50:01.552871   51910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:50:01.554289   51910 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 19:50:01.555987   51910 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	I1205 19:50:01.557329   51910 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:50:01.558770   51910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:50:01.560794   51910 config.go:182] Loaded profile config "functional-481133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:50:01.561562   51910 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:50:01.587998   51910 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:50:01.591410   51910 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:50:01.682077   51910 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-12-05 19:50:01.669312381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 19:50:01.682159   51910 docker.go:295] overlay module found
	I1205 19:50:01.684285   51910 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1205 19:50:01.685673   51910 start.go:298] selected driver: docker
	I1205 19:50:01.685683   51910 start.go:902] validating driver "docker" against &{Name:functional-481133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-481133 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:50:01.685786   51910 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:50:01.687859   51910 out.go:177] 
	W1205 19:50:01.689235   51910 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 19:50:01.690813   51910 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
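The French stderr above is the localized form of the same failure exercised by DryRun: exiting due to RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB allocation is below the usable minimum of 1800MB. The log does not show how the locale is switched; a minimal sketch of one plausible way to elicit the localized output, assuming minikube honors LC_ALL (an assumption, not quoted from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-481133",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	// LC_ALL=fr is a guess at how the harness selects the French catalog.
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput()
	fmt.Println(string(out))
}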

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)
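The -f flag takes a Go template evaluated against the status struct; note that in the command logged above, "kublet:" is just literal label text in the template (a typo in the label, not in the field lookup), while the field names .Host, .Kubelet, .APIServer, and .Kubeconfig are what actually select values. A minimal sketch of the same query, with the label spelled out:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Field names are taken from the command recorded in the log.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-481133",
		"status", "-f",
		"host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}").CombinedOutput()
	fmt.Println(string(out), err)
}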

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-481133 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-481133 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-ttbwn" [b057c496-a812-4acc-8400-e32115505f81] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-ttbwn" [b057c496-a812-4acc-8400-e32115505f81] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.012451923s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30909
functional_test.go:1674: http://192.168.49.2:30909: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-ttbwn

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30909
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.81s)
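The body above is echoserver's standard reply, confirming the request reached the pod behind the NodePort. A minimal sketch of the final verification step, assuming the endpoint URL printed by `minikube service --url` in the log (the NodePort, 30909 here, is assigned per run):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://192.168.49.2:30909")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// echoserver reports the serving pod's hostname in its response body.
	if !strings.Contains(string(body), "Hostname: hello-node-connect") {
		fmt.Println("echoserver did not report the expected pod hostname")
	}
}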

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (31.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1495cd4b-df70-419f-a4a3-5a82deb97595] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.072270772s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-481133 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-481133 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-481133 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-481133 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f7c23328-b30e-40cd-a6b9-c6b5b2ec773b] Pending
helpers_test.go:344: "sp-pod" [f7c23328-b30e-40cd-a6b9-c6b5b2ec773b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f7c23328-b30e-40cd-a6b9-c6b5b2ec773b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.012152025s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-481133 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-481133 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-481133 delete -f testdata/storage-provisioner/pod.yaml: (1.30820181s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-481133 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [323058e1-ebe0-450e-8d2e-cac258635a88] Pending
helpers_test.go:344: "sp-pod" [323058e1-ebe0-450e-8d2e-cac258635a88] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [323058e1-ebe0-450e-8d2e-cac258635a88] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.011670348s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-481133 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.66s)
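The touch/delete/apply/ls sequence above is the persistence check: a file written into the PVC-backed mount must survive the pod being deleted and recreated. A minimal sketch of the same flow, assuming the context and manifests from the log; the wait for the new pod to reach Running between apply and the final exec (which the harness performs) is elided here for brevity:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl is a thin helper around `kubectl --context functional-481133 ...`.
func kubectl(args ...string) error {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-481133"}, args...)...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	// Write into the PVC mount, recycle the pod, then verify the file
	// survived the restart (mirrors the touch/ls pair in the log).
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},
	}
	for _, s := range steps {
		if err := kubectl(s...); err != nil {
			fmt.Println("step failed:", s, err)
			return
		}
	}
}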

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh -n functional-481133 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 cp functional-481133:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2932979818/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh -n functional-481133 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.48s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (21.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-481133 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-mx4qr" [0d6bca3b-b32c-4c20-9e7a-8904ea235427] Pending
helpers_test.go:344: "mysql-859648c796-mx4qr" [0d6bca3b-b32c-4c20-9e7a-8904ea235427] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-mx4qr" [0d6bca3b-b32c-4c20-9e7a-8904ea235427] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.073525593s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-481133 exec mysql-859648c796-mx4qr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-481133 exec mysql-859648c796-mx4qr -- mysql -ppassword -e "show databases;": exit status 1 (245.790627ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-481133 exec mysql-859648c796-mx4qr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-481133 exec mysql-859648c796-mx4qr -- mysql -ppassword -e "show databases;": exit status 1 (162.779833ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-481133 exec mysql-859648c796-mx4qr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-481133 exec mysql-859648c796-mx4qr -- mysql -ppassword -e "show databases;": exit status 1 (134.449218ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-481133 exec mysql-859648c796-mx4qr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.98s)
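The errors above (1045 access denied, then 2002 socket not available) are normal while mysqld is still initializing inside the container; the harness simply reruns the query until it succeeds. A comparable retry loop, assuming the pod name and context from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Retry `show databases;` until mysqld finishes initializing.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-481133",
			"exec", "mysql-859648c796-mx4qr", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Println(string(out))
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("mysql never became ready")
}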

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/12883/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "sudo cat /etc/test/nested/copy/12883/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/12883.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "sudo cat /etc/ssl/certs/12883.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/12883.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "sudo cat /usr/share/ca-certificates/12883.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/128832.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "sudo cat /etc/ssl/certs/128832.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/128832.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "sudo cat /usr/share/ca-certificates/128832.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.41s)
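Each synced certificate is checked under three names: the literal .pem in /etc/ssl/certs, a copy in /usr/share/ca-certificates, and what appears to be an OpenSSL-style subject-hash name (51391683.0 and 3ec20f2e.0 above). A minimal sketch of the existence checks for one cert, using the paths recorded in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/12883.pem",
		"/usr/share/ca-certificates/12883.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		// `minikube ssh "sudo cat <path>"` exits non-zero if the file is absent.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-481133",
			"ssh", "sudo cat "+p)
		if err := cmd.Run(); err != nil {
			fmt.Println("missing or unreadable:", p, err)
		}
	}
}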

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-481133 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
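The label read uses a kubectl go-template that iterates the first node's metadata.labels map. The same template, reproduced from the command above without the extra quoting the harness logs around it:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-481133",
		"get", "nodes", "--output=go-template",
		"--template={{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}").CombinedOutput()
	fmt.Println(string(out), err)
}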

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481133 ssh "sudo systemctl is-active docker": exit status 1 (268.779589ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481133 ssh "sudo systemctl is-active containerd": exit status 1 (260.037986ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
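On a crio node, docker and containerd should both be inactive: `systemctl is-active <unit>` prints the state and exits non-zero for an inactive unit (status 3, which minikube ssh surfaces as the "Process exited with status 3" lines above). A minimal sketch of the same probes:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		// A non-nil err with stdout "inactive" is the expected outcome here.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-481133",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		fmt.Printf("%s: %s(err: %v)\n", unit, string(out), err)
	}
}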

                                                
                                    
x
+
TestFunctional/parallel/License (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-481133 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-481133 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-481133 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 47195: os: process already finished
helpers_test.go:502: unable to terminate pid 46819: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-481133 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-481133 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-481133 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a5722f61-8ede-43dd-9258-4cc948050f2d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a5722f61-8ede-43dd-9258-4cc948050f2d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 18.017829147s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-481133 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-481133
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-481133 image ls --format short --alsologtostderr:
I1205 19:50:03.221449   53086 out.go:296] Setting OutFile to fd 1 ...
I1205 19:50:03.221666   53086 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:50:03.221676   53086 out.go:309] Setting ErrFile to fd 2...
I1205 19:50:03.221681   53086 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:50:03.221926   53086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
I1205 19:50:03.222519   53086 config.go:182] Loaded profile config "functional-481133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:50:03.222636   53086 config.go:182] Loaded profile config "functional-481133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:50:03.223073   53086 cli_runner.go:164] Run: docker container inspect functional-481133 --format={{.State.Status}}
I1205 19:50:03.245802   53086 ssh_runner.go:195] Run: systemctl --version
I1205 19:50:03.245864   53086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-481133
I1205 19:50:03.272610   53086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/functional-481133/id_rsa Username:docker}
I1205 19:50:03.369210   53086 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
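As the stderr trace shows, `image ls` resolves the list by SSHing into the node and running `sudo crictl images --output json`. A minimal sketch of reading that JSON directly, declaring only the fields needed to reproduce the short listing (the struct layout mirrors crictl's images/repoTags output):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList declares just the slice of repo tags from crictl's JSON.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-481133",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}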

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-481133 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | bdba757bc9336 | 520MB  |
| docker.io/library/nginx                 | latest             | a6bd71f48f683 | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-481133  | ffd4cfbbe753e | 34.1MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| docker.io/library/nginx                 | alpine             | 01e5c69afaf63 | 44.4MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-481133 image ls --format table --alsologtostderr:
I1205 19:50:04.036153   53596 out.go:296] Setting OutFile to fd 1 ...
I1205 19:50:04.036512   53596 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:50:04.036528   53596 out.go:309] Setting ErrFile to fd 2...
I1205 19:50:04.036536   53596 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:50:04.036855   53596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
I1205 19:50:04.037728   53596 config.go:182] Loaded profile config "functional-481133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:50:04.037891   53596 config.go:182] Loaded profile config "functional-481133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:50:04.038502   53596 cli_runner.go:164] Run: docker container inspect functional-481133 --format={{.State.Status}}
I1205 19:50:04.058962   53596 ssh_runner.go:195] Run: systemctl --version
I1205 19:50:04.059007   53596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-481133
I1205 19:50:04.081069   53596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/functional-481133/id_rsa Username:docker}
I1205 19:50:04.180347   53596 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-481133 image ls --format json --alsologtostderr:
[{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-481133"],"size":"34114467"
},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed36
2dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":["docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3","docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519653829"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha25
6:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kind
est/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41","repoDigests":["docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc","docker.io/library/nginx@sha256:558b1480dc5c8f4373601a641c56b4fd24a77105d1246bd80b991f8b5c5dc0fc"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44421929"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"73deb9a3f702532592a4167455f
8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-481133 image ls --format json --alsologtostderr:
I1205 19:50:03.770567   53442 out.go:296] Setting OutFile to fd 1 ...
I1205 19:50:03.770683   53442 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:50:03.770692   53442 out.go:309] Setting ErrFile to fd 2...
I1205 19:50:03.770696   53442 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:50:03.770890   53442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
I1205 19:50:03.772378   53442 config.go:182] Loaded profile config "functional-481133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:50:03.772550   53442 config.go:182] Loaded profile config "functional-481133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:50:03.773681   53442 cli_runner.go:164] Run: docker container inspect functional-481133 --format={{.State.Status}}
I1205 19:50:03.803153   53442 ssh_runner.go:195] Run: systemctl --version
I1205 19:50:03.803242   53442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-481133
I1205 19:50:03.821220   53442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/functional-481133/id_rsa Username:docker}
I1205 19:50:03.912484   53442 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-481133 image ls --format yaml --alsologtostderr:
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests:
- docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3
- docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1
repoTags:
- docker.io/library/mysql:5.7
size: "519653829"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-481133
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41
repoDigests:
- docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc
- docker.io/library/nginx@sha256:558b1480dc5c8f4373601a641c56b4fd24a77105d1246bd80b991f8b5c5dc0fc
repoTags:
- docker.io/library/nginx:alpine
size: "44421929"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-481133 image ls --format yaml --alsologtostderr:
I1205 19:50:03.478371   53247 out.go:296] Setting OutFile to fd 1 ...
I1205 19:50:03.478592   53247 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:50:03.478636   53247 out.go:309] Setting ErrFile to fd 2...
I1205 19:50:03.478653   53247 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:50:03.478987   53247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
I1205 19:50:03.479848   53247 config.go:182] Loaded profile config "functional-481133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:50:03.479999   53247 config.go:182] Loaded profile config "functional-481133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:50:03.480485   53247 cli_runner.go:164] Run: docker container inspect functional-481133 --format={{.State.Status}}
I1205 19:50:03.500237   53247 ssh_runner.go:195] Run: systemctl --version
I1205 19:50:03.500281   53247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-481133
I1205 19:50:03.521545   53247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/functional-481133/id_rsa Username:docker}
I1205 19:50:03.616273   53247 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
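
The ImageList subtests above differ only in the --format flag; the underlying calls can be replayed by hand against the same profile (commands verbatim from this run):

	$ out/minikube-linux-amd64 -p functional-481133 image ls --format json
	$ out/minikube-linux-amd64 -p functional-481133 image ls --format yaml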

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481133 ssh pgrep buildkitd: exit status 1 (288.537565ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image build -t localhost/my-image:functional-481133 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-481133 image build -t localhost/my-image:functional-481133 testdata/build --alsologtostderr: (3.53411212s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-481133 image build -t localhost/my-image:functional-481133 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1f8ef0eb7e0
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-481133
--> f85bc6f2bc8
Successfully tagged localhost/my-image:functional-481133
f85bc6f2bc8e9b57ab9ee8739cfdadde32fdb60541d0b7bdf8b5655dc03b5f31
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-481133 image build -t localhost/my-image:functional-481133 testdata/build --alsologtostderr:
I1205 19:50:04.025020   53588 out.go:296] Setting OutFile to fd 1 ...
I1205 19:50:04.025215   53588 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:50:04.025227   53588 out.go:309] Setting ErrFile to fd 2...
I1205 19:50:04.025232   53588 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:50:04.025413   53588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
I1205 19:50:04.026129   53588 config.go:182] Loaded profile config "functional-481133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:50:04.026711   53588 config.go:182] Loaded profile config "functional-481133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:50:04.027225   53588 cli_runner.go:164] Run: docker container inspect functional-481133 --format={{.State.Status}}
I1205 19:50:04.051200   53588 ssh_runner.go:195] Run: systemctl --version
I1205 19:50:04.051256   53588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-481133
I1205 19:50:04.071085   53588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/functional-481133/id_rsa Username:docker}
I1205 19:50:04.164488   53588 build_images.go:151] Building image from path: /tmp/build.1638914326.tar
I1205 19:50:04.164556   53588 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 19:50:04.173049   53588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1638914326.tar
I1205 19:50:04.176411   53588 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1638914326.tar: stat -c "%s %y" /var/lib/minikube/build/build.1638914326.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1638914326.tar': No such file or directory
I1205 19:50:04.176444   53588 ssh_runner.go:362] scp /tmp/build.1638914326.tar --> /var/lib/minikube/build/build.1638914326.tar (3072 bytes)
I1205 19:50:04.226744   53588 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1638914326
I1205 19:50:04.236994   53588 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1638914326 -xf /var/lib/minikube/build/build.1638914326.tar
I1205 19:50:04.247433   53588 crio.go:297] Building image: /var/lib/minikube/build/build.1638914326
I1205 19:50:04.247509   53588 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-481133 /var/lib/minikube/build/build.1638914326 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1205 19:50:07.465877   53588 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-481133 /var/lib/minikube/build/build.1638914326 --cgroup-manager=cgroupfs: (3.218339133s)
I1205 19:50:07.465941   53588 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1638914326
I1205 19:50:07.474128   53588 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1638914326.tar
I1205 19:50:07.482298   53588 build_images.go:207] Built localhost/my-image:functional-481133 from /tmp/build.1638914326.tar
I1205 19:50:07.482333   53588 build_images.go:123] succeeded building to: functional-481133
I1205 19:50:07.482340   53588 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image ls
2023/12/05 19:50:09 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.06s)
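
For reference, the testdata/build context exercised here appears to contain a three-step Dockerfile; reconstructed from the STEP output above (an approximation, not the checked-in file):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

With such a context on disk, the same in-cluster build can be rerun by hand; on the crio runtime it is delegated to podman build inside the node, as the ssh_runner lines show:

	$ out/minikube-linux-amd64 -p functional-481133 image build -t localhost/my-image:functional-481133 testdata/build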

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-481133
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image load --daemon gcr.io/google-containers/addon-resizer:functional-481133 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-481133 image load --daemon gcr.io/google-containers/addon-resizer:functional-481133 --alsologtostderr: (6.93037637s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.16s)
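
A minimal by-hand version of this load-from-daemon flow, using only commands that appear verbatim in the log (the 1.8.8 tag comes from the Setup step above):

	$ docker pull gcr.io/google-containers/addon-resizer:1.8.8
	$ docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-481133
	$ out/minikube-linux-amd64 -p functional-481133 image load --daemon gcr.io/google-containers/addon-resizer:functional-481133
	$ out/minikube-linux-amd64 -p functional-481133 image ls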

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-481133
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image load --daemon gcr.io/google-containers/addon-resizer:functional-481133 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-481133 image load --daemon gcr.io/google-containers/addon-resizer:functional-481133 --alsologtostderr: (4.339062221s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.53s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-481133 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.47.238 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-481133 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
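
The tunnel sequence validated by these serial subtests amounts to the following manual steps; the ingress IP is whatever the LoadBalancer reports once the tunnel is up (10.102.47.238 in this run), and stopping the backgrounded tunnel process is what DeleteTunnel exercises:

	$ out/minikube-linux-amd64 -p functional-481133 tunnel --alsologtostderr &
	$ kubectl --context functional-481133 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}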

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-481133 /tmp/TestFunctionalparallelMountCmdany-port2124687226/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701805789028421335" to /tmp/TestFunctionalparallelMountCmdany-port2124687226/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701805789028421335" to /tmp/TestFunctionalparallelMountCmdany-port2124687226/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701805789028421335" to /tmp/TestFunctionalparallelMountCmdany-port2124687226/001/test-1701805789028421335
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481133 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (281.769152ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  5 19:49 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  5 19:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  5 19:49 test-1701805789028421335
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh cat /mount-9p/test-1701805789028421335
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-481133 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [41d5be79-4e77-4773-82ac-1d08936ee80c] Pending
helpers_test.go:344: "busybox-mount" [41d5be79-4e77-4773-82ac-1d08936ee80c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [41d5be79-4e77-4773-82ac-1d08936ee80c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [41d5be79-4e77-4773-82ac-1d08936ee80c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.009430298s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-481133 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-481133 /tmp/TestFunctionalparallelMountCmdany-port2124687226/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.78s)
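
A stripped-down sketch of the 9p mount round-trip exercised here; /tmp/hostdir is an illustrative host path, the remaining flags are verbatim from the run. As the non-zero exit above shows, the first findmnt probe can fail until the mount settles, so the test retries it:

	$ out/minikube-linux-amd64 mount -p functional-481133 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
	$ out/minikube-linux-amd64 -p functional-481133 ssh "findmnt -T /mount-9p | grep 9p"
	$ out/minikube-linux-amd64 -p functional-481133 ssh -- ls -la /mount-9p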

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image save gcr.io/google-containers/addon-resizer:functional-481133 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-481133 image save gcr.io/google-containers/addon-resizer:functional-481133 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.575945782s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image rm gcr.io/google-containers/addon-resizer:functional-481133 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-481133 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.287253839s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-481133
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 image save --daemon gcr.io/google-containers/addon-resizer:functional-481133 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-481133 image save --daemon gcr.io/google-containers/addon-resizer:functional-481133 --alsologtostderr: (1.157970577s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-481133
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.19s)
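
Taken together, the last four ImageCommands subtests cover a full save/remove/load cycle; sketched by hand below, with ./addon-resizer-save.tar standing in for the absolute workspace path used above:

	$ out/minikube-linux-amd64 -p functional-481133 image save gcr.io/google-containers/addon-resizer:functional-481133 ./addon-resizer-save.tar
	$ out/minikube-linux-amd64 -p functional-481133 image rm gcr.io/google-containers/addon-resizer:functional-481133
	$ out/minikube-linux-amd64 -p functional-481133 image load ./addon-resizer-save.tar
	$ out/minikube-linux-amd64 -p functional-481133 image save --daemon gcr.io/google-containers/addon-resizer:functional-481133
	$ docker image inspect gcr.io/google-containers/addon-resizer:functional-481133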

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-481133 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-481133 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-g4cqq" [77c68296-b5b4-44d1-be9f-7d7becaf6f10] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-g4cqq" [77c68296-b5b4-44d1-be9f-7d7becaf6f10] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.009829176s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.15s)
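
The hello-node deployment that the remaining ServiceCmd subtests query is created with plain kubectl, exactly as logged:

	$ kubectl --context functional-481133 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	$ kubectl --context functional-481133 expose deployment hello-node --type=NodePort --port=8080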

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-481133 /tmp/TestFunctionalparallelMountCmdspecific-port2521182652/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481133 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (275.619554ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-481133 /tmp/TestFunctionalparallelMountCmdspecific-port2521182652/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481133 ssh "sudo umount -f /mount-9p": exit status 1 (260.15441ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-481133 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-481133 /tmp/TestFunctionalparallelMountCmdspecific-port2521182652/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.68s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-481133 /tmp/TestFunctionalparallelMountCmdVerifyCleanup358355920/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-481133 /tmp/TestFunctionalparallelMountCmdVerifyCleanup358355920/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-481133 /tmp/TestFunctionalparallelMountCmdVerifyCleanup358355920/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-481133 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-481133 /tmp/TestFunctionalparallelMountCmdVerifyCleanup358355920/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-481133 /tmp/TestFunctionalparallelMountCmdVerifyCleanup358355920/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-481133 /tmp/TestFunctionalparallelMountCmdVerifyCleanup358355920/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.99s)
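
The cleanup path verified here is the --kill flag, which tears down every mount daemon for the profile in one call (host path illustrative):

	$ out/minikube-linux-amd64 mount -p functional-481133 /tmp/hostdir:/mount1 --alsologtostderr -v=1 &
	$ out/minikube-linux-amd64 mount -p functional-481133 --kill=true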

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "343.024583ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "72.727755ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "313.852463ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "67.210291ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
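
The three ProfileCmd timing subtests vary only the output flags on the same subcommand:

	$ out/minikube-linux-amd64 profile list
	$ out/minikube-linux-amd64 profile list -o json
	$ out/minikube-linux-amd64 profile list -o json --light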

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.98s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-481133 service list -o json: (1.747948624s)
functional_test.go:1493: Took "1.748055935s" to run "out/minikube-linux-amd64 -p functional-481133 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.75s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31523
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.69s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.63s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-481133 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31523
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.61s)
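
The URL-discovery variants above, replayed by hand; the 192.168.49.2:31523 endpoint is specific to this run:

	$ out/minikube-linux-amd64 -p functional-481133 service list
	$ out/minikube-linux-amd64 -p functional-481133 service --namespace=default --https --url hello-node
	$ out/minikube-linux-amd64 -p functional-481133 service hello-node --url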

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-481133
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-481133
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-481133
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (64s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-612238 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-612238 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m4.001867414s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (64.00s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.3s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-612238 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-612238 addons enable ingress --alsologtostderr -v=5: (11.300854992s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.30s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.55s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-612238 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.55s)
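
Condensed, the legacy-cluster setup performed by these three subtests is the following command sequence (verbatim from the log):

	$ out/minikube-linux-amd64 start -p ingress-addon-legacy-612238 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker --container-runtime=crio
	$ out/minikube-linux-amd64 -p ingress-addon-legacy-612238 addons enable ingress --alsologtostderr -v=5
	$ out/minikube-linux-amd64 -p ingress-addon-legacy-612238 addons enable ingress-dns --alsologtostderr -v=5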

                                                
                                    
TestJSONOutput/start/Command (70.58s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-580909 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1205 19:54:38.893020   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
E1205 19:54:49.133317   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
E1205 19:55:09.613602   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-580909 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m10.584043372s)
--- PASS: TestJSONOutput/start/Command (70.58s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-580909 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-580909 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.73s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-580909 --output=json --user=testUser
E1205 19:55:50.575020   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-580909 --output=json --user=testUser: (5.734463262s)
--- PASS: TestJSONOutput/stop/Command (5.73s)
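
The TestJSONOutput group drives one profile through start/pause/unpause/stop with machine-readable output; the four commands, as logged:

	$ out/minikube-linux-amd64 start -p json-output-580909 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=crio
	$ out/minikube-linux-amd64 pause -p json-output-580909 --output=json --user=testUser
	$ out/minikube-linux-amd64 unpause -p json-output-580909 --output=json --user=testUser
	$ out/minikube-linux-amd64 stop -p json-output-580909 --output=json --user=testUser

Each emits a stream of CloudEvents-style JSON lines like the ones shown for TestErrorJSONOutput below.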

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-043210 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-043210 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.753369ms)

-- stdout --
	{"specversion":"1.0","id":"ed0ac62b-ddf6-4bc6-8524-921f42714841","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-043210] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c525131-51e4-403d-b97a-6e0ecec95e84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17731"}}
	{"specversion":"1.0","id":"a13a88d2-8837-418f-b08a-ed836fe05a39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7d5b4a80-7510-4b58-9695-5f7a566e8e73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig"}}
	{"specversion":"1.0","id":"92e39295-266e-4f62-bf1b-a3701c6545b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube"}}
	{"specversion":"1.0","id":"d435bbf3-cf16-40c9-bd18-f1efbfb924a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"86bf89aa-de8e-4dd1-ac45-c53e9c5e7141","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b05c4a4b-54de-4fc2-b570-6a120dc3c6d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-043210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-043210
--- PASS: TestErrorJSONOutput (0.23s)
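
Aside: the JSON lines quoted above are CloudEvents-style envelopes, which is what the TestJSONOutput and TestErrorJSONOutput suites assert on. For anyone post-processing these reports, a minimal Go sketch that decodes one such event follows; the struct covers only the keys visible in the stdout above and is an assumption, not minikube's own schema definition.

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the CloudEvents-style lines printed by `minikube --output=json`,
// limited to the keys visible in the report above (assumed, partial schema).
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event from the TestErrorJSONOutput run above, verbatim.
	line := `{"specversion":"1.0","id":"b05c4a4b-54de-4fc2-b570-6a120dc3c6d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Printf("%s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
}

Decoding the error event this way surfaces the same name, exit code, and message fields the test checks for.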

TestKicCustomNetwork/create_custom_network (33.89s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-009227 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-009227 --network=: (31.885434537s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-009227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-009227
E1205 19:56:29.231342   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
E1205 19:56:29.236647   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
E1205 19:56:29.246998   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
E1205 19:56:29.267322   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
E1205 19:56:29.307610   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
E1205 19:56:29.387955   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
E1205 19:56:29.548342   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
E1205 19:56:29.868962   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
E1205 19:56:30.509394   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-009227: (1.981858479s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.89s)

TestKicCustomNetwork/use_default_bridge_network (24.44s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-856645 --network=bridge
E1205 19:56:31.790462   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
E1205 19:56:34.352298   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
E1205 19:56:39.472520   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
E1205 19:56:49.713407   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-856645 --network=bridge: (22.519799439s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-856645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-856645
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-856645: (1.903220732s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.44s)

TestKicExistingNetwork (25.2s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-481810 --network=existing-network
E1205 19:57:10.194189   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
E1205 19:57:12.495410   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-481810 --network=existing-network: (23.090046492s)
helpers_test.go:175: Cleaning up "existing-network-481810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-481810
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-481810: (1.973588194s)
--- PASS: TestKicExistingNetwork (25.20s)

TestKicCustomSubnet (27.12s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-684218 --subnet=192.168.60.0/24
E1205 19:57:27.449327   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-684218 --subnet=192.168.60.0/24: (25.014742377s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-684218 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-684218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-684218
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-684218: (2.088559861s)
--- PASS: TestKicCustomSubnet (27.12s)
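
The subnet verification above shells out to docker. As a rough sketch, the same check could be scripted in Go; the network name and expected subnet below are taken from this run, and everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read back the subnet docker assigned to the network, exactly as
	// kic_custom_network_test.go:161 does via the CLI.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-684218",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
		fmt.Printf("unexpected subnet: %q\n", got)
	} else {
		fmt.Println("subnet matches:", got)
	}
}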

TestKicStaticIP (27.61s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-758341 --static-ip=192.168.200.200
E1205 19:57:51.155591   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-758341 --static-ip=192.168.200.200: (25.388316043s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-758341 ip
helpers_test.go:175: Cleaning up "static-ip-758341" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-758341
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-758341: (2.079875381s)
--- PASS: TestKicStaticIP (27.61s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (53.67s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-870764 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-870764 --driver=docker  --container-runtime=crio: (24.010203132s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-873303 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-873303 --driver=docker  --container-runtime=crio: (24.555829182s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-870764
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-873303
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-873303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-873303
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-873303: (1.854224694s)
helpers_test.go:175: Cleaning up "first-870764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-870764
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-870764: (2.216064959s)
--- PASS: TestMinikubeProfile (53.67s)

TestMountStart/serial/StartWithMountFirst (8.05s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-581760 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1205 19:59:13.077088   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-581760 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.051440871s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.05s)

TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-581760 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (8.07s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-596112 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-596112 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.066753811s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.07s)

TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-596112 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.64s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-581760 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-581760 --alsologtostderr -v=5: (1.640493402s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-596112 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.22s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-596112
E1205 19:59:28.653169   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-596112: (1.215335264s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (6.87s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-596112
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-596112: (5.872460577s)
--- PASS: TestMountStart/serial/RestartStopped (6.87s)

TestMountStart/serial/VerifyMountPostStop (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-596112 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (86.67s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-340918 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1205 19:59:56.336340   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-340918 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m26.212010758s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (86.67s)

TestMultiNode/serial/DeployApp2Nodes (4.06s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340918 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340918 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-340918 -- rollout status deployment/busybox: (2.231039065s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340918 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340918 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340918 -- exec busybox-5bc68d56bd-fcrbt -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340918 -- exec busybox-5bc68d56bd-pl2b5 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340918 -- exec busybox-5bc68d56bd-fcrbt -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340918 -- exec busybox-5bc68d56bd-pl2b5 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340918 -- exec busybox-5bc68d56bd-fcrbt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340918 -- exec busybox-5bc68d56bd-pl2b5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.06s)

TestMultiNode/serial/AddNode (60.25s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-340918 -v 3 --alsologtostderr
E1205 20:01:29.230747   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
E1205 20:01:56.918105   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-340918 -v 3 --alsologtostderr: (59.636762081s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (60.25s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-340918 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.28s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)

TestMultiNode/serial/CopyFile (9.4s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 cp testdata/cp-test.txt multinode-340918:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 cp multinode-340918:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1254313263/001/cp-test_multinode-340918.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 cp multinode-340918:/home/docker/cp-test.txt multinode-340918-m02:/home/docker/cp-test_multinode-340918_multinode-340918-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918-m02 "sudo cat /home/docker/cp-test_multinode-340918_multinode-340918-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 cp multinode-340918:/home/docker/cp-test.txt multinode-340918-m03:/home/docker/cp-test_multinode-340918_multinode-340918-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918-m03 "sudo cat /home/docker/cp-test_multinode-340918_multinode-340918-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 cp testdata/cp-test.txt multinode-340918-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 cp multinode-340918-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1254313263/001/cp-test_multinode-340918-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 cp multinode-340918-m02:/home/docker/cp-test.txt multinode-340918:/home/docker/cp-test_multinode-340918-m02_multinode-340918.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918 "sudo cat /home/docker/cp-test_multinode-340918-m02_multinode-340918.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 cp multinode-340918-m02:/home/docker/cp-test.txt multinode-340918-m03:/home/docker/cp-test_multinode-340918-m02_multinode-340918-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918-m03 "sudo cat /home/docker/cp-test_multinode-340918-m02_multinode-340918-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 cp testdata/cp-test.txt multinode-340918-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 cp multinode-340918-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1254313263/001/cp-test_multinode-340918-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 cp multinode-340918-m03:/home/docker/cp-test.txt multinode-340918:/home/docker/cp-test_multinode-340918-m03_multinode-340918.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918 "sudo cat /home/docker/cp-test_multinode-340918-m03_multinode-340918.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 cp multinode-340918-m03:/home/docker/cp-test.txt multinode-340918-m02:/home/docker/cp-test_multinode-340918-m03_multinode-340918-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 ssh -n multinode-340918-m02 "sudo cat /home/docker/cp-test_multinode-340918-m03_multinode-340918-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.40s)

TestMultiNode/serial/StopNode (2.15s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-340918 node stop m03: (1.198614961s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-340918 status: exit status 7 (471.942207ms)

-- stdout --
	multinode-340918
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-340918-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-340918-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-340918 status --alsologtostderr: exit status 7 (476.415206ms)

-- stdout --
	multinode-340918
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-340918-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-340918-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1205 20:02:23.584962  113490 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:02:23.585265  113490 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:02:23.585277  113490 out.go:309] Setting ErrFile to fd 2...
	I1205 20:02:23.585284  113490 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:02:23.585495  113490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
	I1205 20:02:23.585714  113490 out.go:303] Setting JSON to false
	I1205 20:02:23.585759  113490 mustload.go:65] Loading cluster: multinode-340918
	I1205 20:02:23.585800  113490 notify.go:220] Checking for updates...
	I1205 20:02:23.586243  113490 config.go:182] Loaded profile config "multinode-340918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:02:23.586260  113490 status.go:255] checking status of multinode-340918 ...
	I1205 20:02:23.586687  113490 cli_runner.go:164] Run: docker container inspect multinode-340918 --format={{.State.Status}}
	I1205 20:02:23.603477  113490 status.go:330] multinode-340918 host status = "Running" (err=<nil>)
	I1205 20:02:23.603508  113490 host.go:66] Checking if "multinode-340918" exists ...
	I1205 20:02:23.603874  113490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-340918
	I1205 20:02:23.619924  113490 host.go:66] Checking if "multinode-340918" exists ...
	I1205 20:02:23.620158  113490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:02:23.620240  113490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918
	I1205 20:02:23.636760  113490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918/id_rsa Username:docker}
	I1205 20:02:23.733087  113490 ssh_runner.go:195] Run: systemctl --version
	I1205 20:02:23.737060  113490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:02:23.747901  113490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:02:23.799819  113490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-12-05 20:02:23.791038936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:02:23.800403  113490 kubeconfig.go:92] found "multinode-340918" server: "https://192.168.58.2:8443"
	I1205 20:02:23.800426  113490 api_server.go:166] Checking apiserver status ...
	I1205 20:02:23.800457  113490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:02:23.810562  113490 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1416/cgroup
	I1205 20:02:23.819116  113490 api_server.go:182] apiserver freezer: "11:freezer:/docker/863592a132d83965efccebe87c95756e84f3f16e202315cf489fc372f87f4af7/crio/crio-230bd31d887ca1dc749c3d899cea394f5631e0688585a2ee83a5308bcb2c29e5"
	I1205 20:02:23.819475  113490 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/863592a132d83965efccebe87c95756e84f3f16e202315cf489fc372f87f4af7/crio/crio-230bd31d887ca1dc749c3d899cea394f5631e0688585a2ee83a5308bcb2c29e5/freezer.state
	I1205 20:02:23.828085  113490 api_server.go:204] freezer state: "THAWED"
	I1205 20:02:23.828116  113490 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1205 20:02:23.832082  113490 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1205 20:02:23.832113  113490 status.go:421] multinode-340918 apiserver status = Running (err=<nil>)
	I1205 20:02:23.832122  113490 status.go:257] multinode-340918 status: &{Name:multinode-340918 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 20:02:23.832139  113490 status.go:255] checking status of multinode-340918-m02 ...
	I1205 20:02:23.832499  113490 cli_runner.go:164] Run: docker container inspect multinode-340918-m02 --format={{.State.Status}}
	I1205 20:02:23.849086  113490 status.go:330] multinode-340918-m02 host status = "Running" (err=<nil>)
	I1205 20:02:23.849122  113490 host.go:66] Checking if "multinode-340918-m02" exists ...
	I1205 20:02:23.849433  113490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-340918-m02
	I1205 20:02:23.866184  113490 host.go:66] Checking if "multinode-340918-m02" exists ...
	I1205 20:02:23.866414  113490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:02:23.866448  113490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-340918-m02
	I1205 20:02:23.883057  113490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17731-6088/.minikube/machines/multinode-340918-m02/id_rsa Username:docker}
	I1205 20:02:23.973037  113490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:02:23.983745  113490 status.go:257] multinode-340918-m02 status: &{Name:multinode-340918-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1205 20:02:23.983795  113490 status.go:255] checking status of multinode-340918-m03 ...
	I1205 20:02:23.984079  113490 cli_runner.go:164] Run: docker container inspect multinode-340918-m03 --format={{.State.Status}}
	I1205 20:02:24.000520  113490 status.go:330] multinode-340918-m03 host status = "Stopped" (err=<nil>)
	I1205 20:02:24.000544  113490 status.go:343] host is not running, skipping remaining checks
	I1205 20:02:24.000550  113490 status.go:257] multinode-340918-m03 status: &{Name:multinode-340918-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.15s)
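
The --alsologtostderr trace above shows how status is assembled: docker container inspect for the host state, `systemctl is-active kubelet` over SSH for the kubelet, and an HTTPS probe of the apiserver's /healthz endpoint. A hedged sketch of that last probe follows; the address comes from the log above, and skipping certificate verification is purely for illustration (the real check uses the cluster's credentials).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Probe the endpoint checked in the log above; a 200 with body "ok"
	// is what minikube reports as a healthy apiserver.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}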

TestMultiNode/serial/StartAfterStop (11.02s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 node start m03 --alsologtostderr
E1205 20:02:27.449372   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-340918 node start m03 --alsologtostderr: (10.325694587s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.02s)

TestMultiNode/serial/RestartKeepsNodes (111.52s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-340918
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-340918
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-340918: (24.829376343s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-340918 --wait=true -v=8 --alsologtostderr
E1205 20:03:50.494271   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-340918 --wait=true -v=8 --alsologtostderr: (1m26.572602313s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-340918
--- PASS: TestMultiNode/serial/RestartKeepsNodes (111.52s)

TestMultiNode/serial/DeleteNode (4.7s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 node delete m03
E1205 20:04:28.653422   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-340918 node delete m03: (4.108697973s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.70s)

TestMultiNode/serial/StopMultiNode (23.85s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-340918 stop: (23.659771821s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-340918 status: exit status 7 (92.547928ms)

-- stdout --
	multinode-340918
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-340918-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-340918 status --alsologtostderr: exit status 7 (93.08097ms)

-- stdout --
	multinode-340918
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-340918-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1205 20:04:55.054323  123612 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:04:55.054438  123612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:04:55.054442  123612 out.go:309] Setting ErrFile to fd 2...
	I1205 20:04:55.054447  123612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:04:55.054658  123612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
	I1205 20:04:55.054823  123612 out.go:303] Setting JSON to false
	I1205 20:04:55.054854  123612 mustload.go:65] Loading cluster: multinode-340918
	I1205 20:04:55.054989  123612 notify.go:220] Checking for updates...
	I1205 20:04:55.055283  123612 config.go:182] Loaded profile config "multinode-340918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:04:55.055303  123612 status.go:255] checking status of multinode-340918 ...
	I1205 20:04:55.055756  123612 cli_runner.go:164] Run: docker container inspect multinode-340918 --format={{.State.Status}}
	I1205 20:04:55.073716  123612 status.go:330] multinode-340918 host status = "Stopped" (err=<nil>)
	I1205 20:04:55.073742  123612 status.go:343] host is not running, skipping remaining checks
	I1205 20:04:55.073748  123612 status.go:257] multinode-340918 status: &{Name:multinode-340918 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 20:04:55.073777  123612 status.go:255] checking status of multinode-340918-m02 ...
	I1205 20:04:55.074020  123612 cli_runner.go:164] Run: docker container inspect multinode-340918-m02 --format={{.State.Status}}
	I1205 20:04:55.091193  123612 status.go:330] multinode-340918-m02 host status = "Stopped" (err=<nil>)
	I1205 20:04:55.091215  123612 status.go:343] host is not running, skipping remaining checks
	I1205 20:04:55.091221  123612 status.go:257] multinode-340918-m02 status: &{Name:multinode-340918-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.85s)

TestMultiNode/serial/RestartMultiNode (75.05s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-340918 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-340918 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m14.448478638s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340918 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (75.05s)

TestMultiNode/serial/ValidateNameConflict (27.26s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-340918
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-340918-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-340918-m02 --driver=docker  --container-runtime=crio: exit status 14 (79.148585ms)

-- stdout --
	* [multinode-340918-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-340918-m02' is duplicated with machine name 'multinode-340918-m02' in profile 'multinode-340918'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-340918-m03 --driver=docker  --container-runtime=crio
E1205 20:06:29.231131   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-340918-m03 --driver=docker  --container-runtime=crio: (24.953122368s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-340918
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-340918: exit status 80 (275.528857ms)

-- stdout --
	* Adding node m03 to cluster multinode-340918
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-340918-m03 already exists in multinode-340918-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-340918-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-340918-m03: (1.894128554s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.26s)

TestPreload (147.68s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-619817 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1205 20:07:27.449455   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-619817 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m13.768375316s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-619817 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-619817
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-619817: (5.699278897s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-619817 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-619817 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m4.708783038s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-619817 image list
helpers_test.go:175: Cleaning up "test-preload-619817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-619817
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-619817: (2.292236812s)
--- PASS: TestPreload (147.68s)

TestScheduledStopUnix (99.25s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-010694 --memory=2048 --driver=docker  --container-runtime=crio
E1205 20:09:28.653477   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-010694 --memory=2048 --driver=docker  --container-runtime=crio: (23.550302557s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-010694 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-010694 -n scheduled-stop-010694
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-010694 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-010694 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-010694 -n scheduled-stop-010694
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-010694
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-010694 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-010694
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-010694: exit status 7 (81.556641ms)
-- stdout --
	scheduled-stop-010694
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-010694 -n scheduled-stop-010694
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-010694 -n scheduled-stop-010694: exit status 7 (77.623159ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-010694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-010694
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-010694: (4.201763348s)
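The scheduled-stop flow above can be replayed by hand; a condensed sketch of the same commands (comments are interpretive, not harness output):

	minikube stop -p scheduled-stop-010694 --schedule 5m        # arm a stop five minutes out
	minikube status --format={{.TimeToStop}} -p scheduled-stop-010694
	minikube stop -p scheduled-stop-010694 --schedule 15s       # re-arming replaces the earlier timer
	minikube stop -p scheduled-stop-010694 --cancel-scheduled   # disarm without stopping the host
	minikube stop -p scheduled-stop-010694 --schedule 15s       # arm again and let it fire
	minikube status -p scheduled-stop-010694                    # afterwards exits 7 (host Stopped)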
--- PASS: TestScheduledStopUnix (99.25s)

TestInsufficientStorage (10.55s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-095485 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
E1205 20:10:51.697518   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-095485 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.159374584s)
-- stdout --
	{"specversion":"1.0","id":"ce7c5f5f-19d2-4bdc-820e-44f00ec9821f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-095485] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0963d9c2-2405-4c29-92d0-b499749aef71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17731"}}
	{"specversion":"1.0","id":"d5332ba4-9152-487f-9ea2-36d607c75ab3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"74a949c2-d8e3-4a59-9c06-d94d535433ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig"}}
	{"specversion":"1.0","id":"3e6852f0-7b69-415d-a2f6-5497abc479fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube"}}
	{"specversion":"1.0","id":"6a755101-7815-4837-aa64-814cbbc0dbdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ef370dfc-7294-459b-b0d2-708feced9c02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"80a26bb3-8c1d-4367-a97c-a6999f7de230","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1b563f88-8e62-4175-9fd5-acede48bc6a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5284de2d-97c4-4c80-af0b-2b47f7d27013","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"dfa1cd20-a2d8-4f93-80b3-15a49f725b79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"99fc6a0f-e9ea-4cc5-ba01-a85027a4e11b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-095485 in cluster insufficient-storage-095485","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4fca8fe3-30f9-4162-a3a3-d20cbfdcaeb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7529b52-0f82-4a9c-8a3d-afaa356d82cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1677794e-9cd4-42be-874e-36bd48a683ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-095485 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-095485 --output=json --layout=cluster: exit status 7 (270.179701ms)
-- stdout --
	{"Name":"insufficient-storage-095485","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-095485","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1205 20:10:58.417045  145523 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-095485" does not appear in /home/jenkins/minikube-integration/17731-6088/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-095485 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-095485 --output=json --layout=cluster: exit status 7 (269.092401ms)
-- stdout --
	{"Name":"insufficient-storage-095485","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-095485","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1205 20:10:58.686316  145610 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-095485" does not appear in /home/jenkins/minikube-integration/17731-6088/kubeconfig
	E1205 20:10:58.695623  145610 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/insufficient-storage-095485/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-095485" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-095485
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-095485: (1.851224333s)
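The JSON events above show the two test-only knobs that fake a full disk (MINIKUBE_TEST_STORAGE_CAPACITY=100, MINIKUBE_TEST_AVAILABLE_STORAGE=19). A sketch of the same probe, assuming those variables are honored when set in the environment:

	# the RSRC_DOCKER_STORAGE check should trip and start should exit 26
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
		minikube start -p insufficient-storage-095485 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio
	# cluster-layout status then reports StatusCode 507 (InsufficientStorage) and exits 7
	minikube status -p insufficient-storage-095485 --output=json --layout=cluster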
--- PASS: TestInsufficientStorage (10.55s)

TestKubernetesUpgrade (354.4s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-176085 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1205 20:12:52.278697   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-176085 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (57.017728932s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-176085
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-176085: (2.250376697s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-176085 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-176085 status --format={{.Host}}: exit status 7 (93.61009ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-176085 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-176085 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.48496s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-176085 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-176085 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-176085 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (112.503519ms)
-- stdout --
	* [kubernetes-upgrade-176085] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-176085
	    minikube start -p kubernetes-upgrade-176085 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1760852 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-176085 --kubernetes-version=v1.29.0-rc.1
	    
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-176085 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-176085 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.175110336s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-176085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-176085
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-176085: (2.202959683s)
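Condensed, the upgrade path this test verifies (comments interpretive):

	minikube start -p kubernetes-upgrade-176085 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-176085
	# upgrade the stopped cluster to the release candidate
	minikube start -p kubernetes-upgrade-176085 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --driver=docker --container-runtime=crio
	# a downgrade attempt must be refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED)
	minikube start -p kubernetes-upgrade-176085 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	# restarting at the current version still works after the refused downgrade
	minikube start -p kubernetes-upgrade-176085 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --driver=docker --container-runtime=crio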
--- PASS: TestKubernetesUpgrade (354.40s)

TestMissingContainerUpgrade (159.2s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.570916833.exe start -p missing-upgrade-991525 --memory=2200 --driver=docker  --container-runtime=crio
E1205 20:11:29.231164   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.570916833.exe start -p missing-upgrade-991525 --memory=2200 --driver=docker  --container-runtime=crio: (1m32.915807986s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-991525
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-991525
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-991525 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-991525 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m1.62589283s)
helpers_test.go:175: Cleaning up "missing-upgrade-991525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-991525
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-991525: (3.476727346s)
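In outline, the scenario being exercised: an old minikube binary creates the cluster, the docker container behind it is deleted out-of-band, and the current binary must recreate it. Roughly:

	/tmp/minikube-v1.9.0.570916833.exe start -p missing-upgrade-991525 --memory=2200 --driver=docker --container-runtime=crio
	docker stop missing-upgrade-991525 && docker rm missing-upgrade-991525   # remove the node container behind minikube's back
	out/minikube-linux-amd64 start -p missing-upgrade-991525 --memory=2200 --driver=docker --container-runtime=crio   # must rebuild it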
--- PASS: TestMissingContainerUpgrade (159.20s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-000588 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-000588 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (91.942796ms)
-- stdout --
	* [NoKubernetes-000588] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
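As the usage error says, --no-kubernetes and --kubernetes-version are mutually exclusive; when a version is pinned in the global config, the fix minikube itself suggests is:

	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-000588 --no-kubernetes --driver=docker --container-runtime=crio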
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (36.55s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-000588 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-000588 --driver=docker  --container-runtime=crio: (36.129190603s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-000588 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.55s)

TestNoKubernetes/serial/StartWithStopK8s (8.71s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-000588 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-000588 --no-kubernetes --driver=docker  --container-runtime=crio: (6.354945379s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-000588 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-000588 status -o json: exit status 2 (315.941346ms)
-- stdout --
	{"Name":"NoKubernetes-000588","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-000588
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-000588: (2.043692813s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.71s)

TestNoKubernetes/serial/Start (10.24s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-000588 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-000588 --no-kubernetes --driver=docker  --container-runtime=crio: (10.238191641s)
--- PASS: TestNoKubernetes/serial/Start (10.24s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-000588 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-000588 "sudo systemctl is-active --quiet service kubelet": exit status 1 (356.446599ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
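The failing ssh probe is the passing case here: systemctl is-active exits non-zero (3, meaning inactive, in this run) when the kubelet unit is not running. A sketch of the check, with the trailing echo added purely for illustration:

	minikube ssh -p NoKubernetes-000588 "sudo systemctl is-active --quiet service kubelet" \
		|| echo "kubelet inactive, as expected"   # illustrative follow-up, not harness output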
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

TestNoKubernetes/serial/ProfileList (1.27s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.27s)

TestNoKubernetes/serial/Stop (1.44s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-000588
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-000588: (1.443241677s)
--- PASS: TestNoKubernetes/serial/Stop (1.44s)

TestNoKubernetes/serial/StartNoArgs (7.8s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-000588 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-000588 --driver=docker  --container-runtime=crio: (7.799258596s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.80s)

TestNetworkPlugins/group/false (4.23s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-492071 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-492071 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (193.485146ms)
-- stdout --
	* [false-492071] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1205 20:12:01.606474  162010 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:12:01.607461  162010 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:12:01.607475  162010 out.go:309] Setting ErrFile to fd 2...
	I1205 20:12:01.607483  162010 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:12:01.607847  162010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6088/.minikube/bin
	I1205 20:12:01.608586  162010 out.go:303] Setting JSON to false
	I1205 20:12:01.609880  162010 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3274,"bootTime":1701803848,"procs":426,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:12:01.609952  162010 start.go:138] virtualization: kvm guest
	I1205 20:12:01.612442  162010 out.go:177] * [false-492071] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:12:01.614068  162010 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:12:01.614204  162010 notify.go:220] Checking for updates...
	I1205 20:12:01.615578  162010 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:12:01.617879  162010 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6088/kubeconfig
	I1205 20:12:01.619813  162010 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6088/.minikube
	I1205 20:12:01.621319  162010 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:12:01.622768  162010 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:12:01.624746  162010 config.go:182] Loaded profile config "NoKubernetes-000588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1205 20:12:01.624863  162010 config.go:182] Loaded profile config "missing-upgrade-991525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1205 20:12:01.624945  162010 config.go:182] Loaded profile config "running-upgrade-032685": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1205 20:12:01.625047  162010 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:12:01.652550  162010 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 20:12:01.652690  162010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:12:01.715756  162010 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:110 SystemTime:2023-12-05 20:12:01.706559572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:12:01.715883  162010 docker.go:295] overlay module found
	I1205 20:12:01.718140  162010 out.go:177] * Using the docker driver based on user configuration
	I1205 20:12:01.719603  162010 start.go:298] selected driver: docker
	I1205 20:12:01.719627  162010 start.go:902] validating driver "docker" against <nil>
	I1205 20:12:01.719637  162010 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:12:01.722243  162010 out.go:177] 
	W1205 20:12:01.723875  162010 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1205 20:12:01.725343  162010 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-492071 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-492071

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-492071

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-492071

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-492071

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-492071

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-492071

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-492071

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-492071

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-492071

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-492071

>>> host: /etc/nsswitch.conf:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: /etc/hosts:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: /etc/resolv.conf:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-492071

>>> host: crictl pods:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: crictl containers:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> k8s: describe netcat deployment:
error: context "false-492071" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-492071" does not exist

>>> k8s: netcat logs:
error: context "false-492071" does not exist

>>> k8s: describe coredns deployment:
error: context "false-492071" does not exist

>>> k8s: describe coredns pods:
error: context "false-492071" does not exist

>>> k8s: coredns logs:
error: context "false-492071" does not exist

>>> k8s: describe api server pod(s):
error: context "false-492071" does not exist

>>> k8s: api server logs:
error: context "false-492071" does not exist

>>> host: /etc/cni:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: ip a s:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: ip r s:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: iptables-save:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: iptables table nat:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> k8s: describe kube-proxy daemon set:
error: context "false-492071" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-492071" does not exist

>>> k8s: kube-proxy logs:
error: context "false-492071" does not exist

>>> host: kubelet daemon status:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: kubelet daemon config:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> k8s: kubelet logs:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt
    server: https://127.0.0.1:32926
  name: missing-upgrade-991525
contexts:
- context:
    cluster: missing-upgrade-991525
    user: missing-upgrade-991525
  name: missing-upgrade-991525
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-991525
  user:
    client-certificate: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/missing-upgrade-991525/client.crt
    client-key: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/missing-upgrade-991525/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-492071

>>> host: docker daemon status:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: docker daemon config:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: /etc/docker/daemon.json:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: docker system info:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: cri-docker daemon status:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: cri-docker daemon config:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: cri-dockerd version:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: containerd daemon status:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: containerd daemon config:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: /etc/containerd/config.toml:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: containerd config dump:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: crio daemon status:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: crio daemon config:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: /etc/crio:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"

>>> host: crio config:
* Profile "false-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492071"
----------------------- debugLogs end: false-492071 [took: 3.808880382s] --------------------------------
helpers_test.go:175: Cleaning up "false-492071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-492071
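The refusal above is itself the pass condition: with the crio runtime minikube requires a CNI, so --cni=false must abort with exit 14 (MK_USAGE) before any cluster exists. A minimal reproduction:

	minikube start -p false-492071 --memory=2048 --cni=false --driver=docker --container-runtime=crio
	echo $?   # 14: 'The "crio" container runtime requires CNI'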
--- PASS: TestNetworkPlugins/group/false (4.23s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-000588 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-000588 "sudo systemctl is-active --quiet service kubelet": exit status 1 (368.317496ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

TestStoppedBinaryUpgrade/Setup (0.75s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.75s)

TestPause/serial/Start (42.96s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-354781 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-354781 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (42.962809048s)
--- PASS: TestPause/serial/Start (42.96s)

TestPause/serial/SecondStartNoReconfiguration (61.83s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-354781 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-354781 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m1.817542371s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (61.83s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.52s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-519106
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.52s)

TestNetworkPlugins/group/auto/Start (70.12s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-492071 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-492071 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m10.121526416s)
--- PASS: TestNetworkPlugins/group/auto/Start (70.12s)

TestPause/serial/Pause (0.73s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-354781 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-354781 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-354781 --output=json --layout=cluster: exit status 2 (303.245019ms)
-- stdout --
	{"Name":"pause-354781","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-354781","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
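
The status JSON above reports HTTP-style codes per component (200 OK, 418 Paused, 405 Stopped, matching the StatusName fields), and the command exits non-zero (status 2 here) while the profile is paused. A sketch for pulling out per-component state, assuming jq is available on the host (it is not part of the harness):

out/minikube-linux-amd64 status -p pause-354781 --output=json --layout=cluster \
  | jq -r '.Nodes[] | .Name as $n | .Components | to_entries[] | "\($n)/\(.key): \(.value.StatusName)"'
# pause-354781/apiserver: Paused
# pause-354781/kubelet: Stopped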

TestPause/serial/Unpause (0.62s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-354781 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

TestPause/serial/PauseAgain (0.76s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-354781 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.76s)

TestPause/serial/DeletePaused (2.6s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-354781 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-354781 --alsologtostderr -v=5: (2.603448282s)
--- PASS: TestPause/serial/DeletePaused (2.60s)

TestPause/serial/VerifyDeletedResources (0.64s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-354781
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-354781: exit status 1 (16.226222ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-354781: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.64s)
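
VerifyDeletedResources passes because docker volume inspect fails with "no such volume" once the profile is gone. An equivalent manual spot-check, as a sketch using standard docker filters:

docker ps -a --filter name=pause-354781 --format '{{.Names}}'       # expect empty output
docker volume inspect pause-354781 >/dev/null 2>&1 || echo "volume removed"
docker network ls --filter name=pause-354781 --format '{{.Name}}'   # expect empty output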

TestNetworkPlugins/group/kindnet/Start (71.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-492071 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-492071 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m11.087939186s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.09s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-492071 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-492071 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q8mgb" [d903716d-38d6-4a0c-a13d-604dd8c40b12] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q8mgb" [d903716d-38d6-4a0c-a13d-604dd8c40b12] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.010842186s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.30s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-492071 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-492071 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-492071 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
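
The DNS/Localhost/HairPin trio above exercises the netcat deployment three ways: resolving kubernetes.default, a zero-I/O connect scan of localhost:8080, and a hairpin connection back to the pod through its own netcat service. In the nc invocations, -z scans without sending data, -w 5 bounds the wait, and -i 5 spaces out the probes (standard netcat flags). The same checks can be replayed by hand against any profile in this run, e.g.:

kubectl --context auto-492071 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-492071 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" && echo "hairpin OK"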

TestNetworkPlugins/group/calico/Start (64.23s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-492071 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-492071 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m4.231290879s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.23s)

TestNetworkPlugins/group/custom-flannel/Start (54.62s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-492071 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1205 20:16:29.230528   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-492071 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (54.619315863s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.62s)
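
Unlike the named --cni=kindnet/--cni=calico presets used elsewhere in this run, --cni here is given a manifest path (testdata/kube-flannel.yaml), which minikube applies as the cluster's CNI. The same invocation, reflowed for readability:

out/minikube-linux-amd64 start -p custom-flannel-492071 \
  --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m \
  --cni=testdata/kube-flannel.yaml \
  --driver=docker --container-runtime=crio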

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ngp5z" [5093700c-18b0-4beb-8d31-4652f39952e6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.020370247s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
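
ControllerPod polls for a Running pod labeled app=kindnet in kube-system. A one-line equivalent with stock kubectl, as a sketch (the harness uses its own poller with a 10m budget):

kubectl --context kindnet-492071 -n kube-system wait pod -l app=kindnet --for=condition=Ready --timeout=600s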

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-492071 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-492071 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8xljd" [3967cc1c-7169-4be2-a611-455044d69afe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8xljd" [3967cc1c-7169-4be2-a611-455044d69afe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.010048605s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-492071 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-492071 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-492071 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8x7j8" [4f7e1df4-a75a-4f2e-8f4e-bcae3f782ac6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.020234307s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/enable-default-cni/Start (70.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-492071 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-492071 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m10.175396658s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.18s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-492071 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-492071 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.64s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-492071 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xnszs" [18cab88d-3a36-47e6-8726-d25e787f9c72] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xnszs" [18cab88d-3a36-47e6-8726-d25e787f9c72] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.015530257s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.64s)

TestNetworkPlugins/group/calico/NetCatPod (12.59s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-492071 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b49pw" [913ad5db-4da1-47ec-96ae-4446bfa53366] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 20:17:27.449325   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-b49pw" [913ad5db-4da1-47ec-96ae-4446bfa53366] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.085343079s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.59s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-492071 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-492071 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-492071 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-492071 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-492071 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-492071 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (61.8s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-492071 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-492071 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.804603653s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.80s)

TestNetworkPlugins/group/bridge/Start (80.44s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-492071 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-492071 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m20.444883065s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.44s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-492071 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-492071 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bzlh4" [6da0058d-db6e-4ed3-aec4-c7a68e9be813] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bzlh4" [6da0058d-db6e-4ed3-aec4-c7a68e9be813] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.009740486s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-492071 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-492071 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-492071 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestStartStop/group/old-k8s-version/serial/FirstStart (126.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-885078 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-885078 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m6.892897446s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (126.89s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-g74r2" [246096bb-8cec-4f94-a46d-90baab8d56bf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.019409296s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestStartStop/group/no-preload/serial/FirstStart (65.05s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-771809 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-771809 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (1m5.052723173s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.05s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-492071 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/flannel/NetCatPod (10.4s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-492071 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qfrm7" [8ebc347e-9315-4e4e-8ab3-29490c995a4a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qfrm7" [8ebc347e-9315-4e4e-8ab3-29490c995a4a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.01167881s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.40s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-492071 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-492071 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-492071 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-492071 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (9.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-492071 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z7fq5" [cf831fe0-e865-4d98-8855-cd62c10629d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-z7fq5" [cf831fe0-e865-4d98-8855-cd62c10629d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.010292252s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.31s)

TestNetworkPlugins/group/bridge/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-492071 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-492071 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-492071 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E1205 20:27:17.595151   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory

TestStartStop/group/embed-certs/serial/FirstStart (71.41s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-084028 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-084028 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m11.410121814s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.41s)
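
--embed-certs inlines client certificate data into the kubeconfig entry instead of referencing files under the .minikube directory (my reading of the flag; verify before relying on it). A quick post-start check, as a sketch:

kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-084028")].user.client-certificate-data}' | wc -c
# non-zero output means the cert is embedded rather than referenced by path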

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-687155 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-687155 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m8.846585912s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.85s)
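
--apiserver-port=8444 moves the API server off minikube's usual 8443, which is what gives this group its "diff-port" name. The effect should be visible in the kubeconfig server URL (a sketch):

kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-687155")].cluster.server}'
# expect a URL ending in :8444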

TestStartStop/group/no-preload/serial/DeployApp (8.83s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-771809 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bfcf4cc4-10f9-488b-9df1-0d2c53d02065] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bfcf4cc4-10f9-488b-9df1-0d2c53d02065] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.016069593s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-771809 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.83s)
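
DeployApp creates a busybox pod from testdata/busybox.yaml (not shown in this log) and then reads ulimit -n inside it. A minimal stand-in manifest consistent with the label and container name the test waits on; the repository's actual file may differ:

kubectl --context no-preload-771809 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl --context no-preload-771809 exec busybox -- /bin/sh -c "ulimit -n"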

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-771809 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-771809 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/no-preload/serial/Stop (11.94s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-771809 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-771809 --alsologtostderr -v=3: (11.941614217s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.94s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-771809 -n no-preload-771809
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-771809 -n no-preload-771809: exit status 7 (81.29407ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-771809 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
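
Note the pattern here: status exits 7 for the stopped profile, the test records "status error: exit status 7 (may be ok)", and the dashboard addon is then enabled against the stopped cluster. The same gate in shell, as a sketch:

out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-771809 -n no-preload-771809
rc=$?
if [ "$rc" -eq 0 ] || [ "$rc" -eq 7 ]; then   # 7 = stopped host, acceptable per the test
  out/minikube-linux-amd64 addons enable dashboard -p no-preload-771809 \
    --images=MetricsScraper=registry.k8s.io/echoserver:1.4
fi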

TestStartStop/group/no-preload/serial/SecondStart (343.35s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-771809 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
E1205 20:20:30.494884   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
E1205 20:20:44.809989   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
E1205 20:20:44.815233   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
E1205 20:20:44.826121   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
E1205 20:20:44.846588   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
E1205 20:20:44.886956   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
E1205 20:20:44.967532   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
E1205 20:20:45.127918   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
E1205 20:20:45.448694   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
E1205 20:20:46.088896   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
E1205 20:20:47.369056   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-771809 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (5m42.958254157s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-771809 -n no-preload-771809
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (343.35s)

TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-084028 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [58ab8cfa-c678-4beb-a03b-c6a8730e620b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1205 20:20:49.929931   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
helpers_test.go:344: "busybox" [58ab8cfa-c678-4beb-a03b-c6a8730e620b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.017193447s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-084028 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-885078 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d87a85b5-d079-4cc9-9010-b84dbc09b2ea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d87a85b5-d079-4cc9-9010-b84dbc09b2ea] Running
E1205 20:20:55.050887   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.015085886s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-885078 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-084028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-084028 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-687155 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fac18cf7-d906-442e-894a-daa2916b6b15] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fac18cf7-d906-442e-894a-daa2916b6b15] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.016918511s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-687155 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.39s)

TestStartStop/group/embed-certs/serial/Stop (11.99s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-084028 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-084028 --alsologtostderr -v=3: (11.991788506s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.99s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-885078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-885078 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/old-k8s-version/serial/Stop (11.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-885078 --alsologtostderr -v=3
E1205 20:21:05.291708   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-885078 --alsologtostderr -v=3: (11.960027676s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.96s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-687155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-687155 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-687155 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-687155 --alsologtostderr -v=3: (13.313227999s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.31s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-084028 -n embed-certs-084028
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-084028 -n embed-certs-084028: exit status 7 (77.922046ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-084028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (341.42s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-084028 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-084028 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m40.886501865s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-084028 -n embed-certs-084028
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (341.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-885078 -n old-k8s-version-885078
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-885078 -n old-k8s-version-885078: exit status 7 (84.067491ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-885078 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (438.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-885078 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-885078 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m18.683160749s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-885078 -n old-k8s-version-885078
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (438.99s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-687155 -n default-k8s-diff-port-687155
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-687155 -n default-k8s-diff-port-687155: exit status 7 (92.418124ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-687155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-687155 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1205 20:21:25.772101   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
E1205 20:21:29.230461   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/ingress-addon-legacy-612238/client.crt: no such file or directory
E1205 20:21:41.453875   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:21:41.459197   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:21:41.469477   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:21:41.489739   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:21:41.530343   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:21:41.610661   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:21:41.771299   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:21:42.091800   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:21:42.732028   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:21:44.012956   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:21:46.573102   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:21:51.693365   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:22:01.933802   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:22:06.732927   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
E1205 20:22:17.595321   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory
E1205 20:22:17.600601   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory
E1205 20:22:17.610910   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory
E1205 20:22:17.631215   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory
E1205 20:22:17.671481   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory
E1205 20:22:17.751826   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory
E1205 20:22:17.912293   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory
E1205 20:22:18.232716   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory
E1205 20:22:18.872848   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory
E1205 20:22:20.153248   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory
E1205 20:22:22.414025   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:22:22.713383   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory
E1205 20:22:23.299071   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/custom-flannel-492071/client.crt: no such file or directory
E1205 20:22:23.304359   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/custom-flannel-492071/client.crt: no such file or directory
E1205 20:22:23.314616   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/custom-flannel-492071/client.crt: no such file or directory
E1205 20:22:23.334942   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/custom-flannel-492071/client.crt: no such file or directory
E1205 20:22:23.375180   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/custom-flannel-492071/client.crt: no such file or directory
E1205 20:22:23.455501   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/custom-flannel-492071/client.crt: no such file or directory
E1205 20:22:23.615896   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/custom-flannel-492071/client.crt: no such file or directory
E1205 20:22:23.936895   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/custom-flannel-492071/client.crt: no such file or directory
E1205 20:22:24.577427   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/custom-flannel-492071/client.crt: no such file or directory
E1205 20:22:25.858596   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/custom-flannel-492071/client.crt: no such file or directory
E1205 20:22:27.449811   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/addons-030936/client.crt: no such file or directory
E1205 20:22:27.834431   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory
E1205 20:22:28.419709   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/custom-flannel-492071/client.crt: no such file or directory
E1205 20:22:33.540258   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/custom-flannel-492071/client.crt: no such file or directory
E1205 20:22:38.074728   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory
E1205 20:22:43.781229   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/custom-flannel-492071/client.crt: no such file or directory
E1205 20:22:58.554978   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory
E1205 20:23:03.375141   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:23:04.261795   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/custom-flannel-492071/client.crt: no such file or directory
E1205 20:23:28.654110   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
E1205 20:23:29.495327   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/enable-default-cni-492071/client.crt: no such file or directory
E1205 20:23:29.500582   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/enable-default-cni-492071/client.crt: no such file or directory
E1205 20:23:29.510833   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/enable-default-cni-492071/client.crt: no such file or directory
E1205 20:23:29.531124   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/enable-default-cni-492071/client.crt: no such file or directory
E1205 20:23:29.571405   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/enable-default-cni-492071/client.crt: no such file or directory
E1205 20:23:29.651706   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/enable-default-cni-492071/client.crt: no such file or directory
E1205 20:23:29.811899   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/enable-default-cni-492071/client.crt: no such file or directory
E1205 20:23:30.132534   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/enable-default-cni-492071/client.crt: no such file or directory
E1205 20:23:30.772736   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/enable-default-cni-492071/client.crt: no such file or directory
E1205 20:23:32.052962   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/enable-default-cni-492071/client.crt: no such file or directory
E1205 20:23:34.613754   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/enable-default-cni-492071/client.crt: no such file or directory
E1205 20:23:39.515363   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory
E1205 20:23:39.734769   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/enable-default-cni-492071/client.crt: no such file or directory
E1205 20:23:45.222820   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/custom-flannel-492071/client.crt: no such file or directory
E1205 20:23:49.975371   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/enable-default-cni-492071/client.crt: no such file or directory
E1205 20:23:58.282455   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/flannel-492071/client.crt: no such file or directory
E1205 20:23:58.287692   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/flannel-492071/client.crt: no such file or directory
E1205 20:23:58.297916   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/flannel-492071/client.crt: no such file or directory
E1205 20:23:58.318165   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/flannel-492071/client.crt: no such file or directory
E1205 20:23:58.358457   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/flannel-492071/client.crt: no such file or directory
E1205 20:23:58.438887   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/flannel-492071/client.crt: no such file or directory
E1205 20:23:58.599278   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/flannel-492071/client.crt: no such file or directory
E1205 20:23:58.920061   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/flannel-492071/client.crt: no such file or directory
E1205 20:23:59.560911   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/flannel-492071/client.crt: no such file or directory
E1205 20:24:00.841544   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/flannel-492071/client.crt: no such file or directory
E1205 20:24:03.401697   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/flannel-492071/client.crt: no such file or directory
E1205 20:24:08.522871   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/flannel-492071/client.crt: no such file or directory
E1205 20:24:10.456503   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/enable-default-cni-492071/client.crt: no such file or directory
E1205 20:24:18.420165   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/bridge-492071/client.crt: no such file or directory
E1205 20:24:18.425452   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/bridge-492071/client.crt: no such file or directory
E1205 20:24:18.435703   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/bridge-492071/client.crt: no such file or directory
E1205 20:24:18.455963   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/bridge-492071/client.crt: no such file or directory
E1205 20:24:18.496283   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/bridge-492071/client.crt: no such file or directory
E1205 20:24:18.576543   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/bridge-492071/client.crt: no such file or directory
E1205 20:24:18.736852   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/bridge-492071/client.crt: no such file or directory
E1205 20:24:18.764075   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/flannel-492071/client.crt: no such file or directory
E1205 20:24:19.057595   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/bridge-492071/client.crt: no such file or directory
E1205 20:24:19.697860   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/bridge-492071/client.crt: no such file or directory
E1205 20:24:20.978398   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/bridge-492071/client.crt: no such file or directory
E1205 20:24:23.539070   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/bridge-492071/client.crt: no such file or directory
E1205 20:24:25.296013   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:24:28.652714   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/functional-481133/client.crt: no such file or directory
E1205 20:24:28.659894   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/bridge-492071/client.crt: no such file or directory
E1205 20:24:38.900750   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/bridge-492071/client.crt: no such file or directory
E1205 20:24:39.244388   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/flannel-492071/client.crt: no such file or directory
E1205 20:24:51.417409   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/enable-default-cni-492071/client.crt: no such file or directory
E1205 20:24:59.381630   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/bridge-492071/client.crt: no such file or directory
E1205 20:25:01.436465   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/calico-492071/client.crt: no such file or directory
E1205 20:25:07.143998   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/custom-flannel-492071/client.crt: no such file or directory
E1205 20:25:20.205131   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/flannel-492071/client.crt: no such file or directory
E1205 20:25:40.342853   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/bridge-492071/client.crt: no such file or directory
E1205 20:25:44.809489   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-687155 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m36.319489089s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-687155 -n default-k8s-diff-port-687155
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.76s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jmxlh" [23e4ec17-d21a-4238-acd8-abe5c09bda55] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1205 20:26:12.494861   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/auto-492071/client.crt: no such file or directory
E1205 20:26:13.337985   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/enable-default-cni-492071/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jmxlh" [23e4ec17-d21a-4238-acd8-abe5c09bda55] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.019086371s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jmxlh" [23e4ec17-d21a-4238-acd8-abe5c09bda55] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010112485s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-771809 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-771809 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (2.99s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-771809 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-771809 -n no-preload-771809
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-771809 -n no-preload-771809: exit status 2 (315.874313ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-771809 -n no-preload-771809
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-771809 -n no-preload-771809: exit status 2 (342.743241ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-771809 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-771809 -n no-preload-771809
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-771809 -n no-preload-771809
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.99s)

TestStartStop/group/newest-cni/serial/FirstStart (37.61s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-887865 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
E1205 20:26:41.453294   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
E1205 20:26:42.125360   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/flannel-492071/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-887865 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (37.611621806s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.61s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vwf2c" [404e345c-a8b9-419e-b1af-3a426ae1c2d2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vwf2c" [404e345c-a8b9-419e-b1af-3a426ae1c2d2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.029211737s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.03s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rwjx5" [970bc6a2-da65-4b1c-a87c-8ac6b69b485d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rwjx5" [970bc6a2-da65-4b1c-a87c-8ac6b69b485d] Running
E1205 20:27:02.263391   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/bridge-492071/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.01968061s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vwf2c" [404e345c-a8b9-419e-b1af-3a426ae1c2d2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009914625s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-084028 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rwjx5" [970bc6a2-da65-4b1c-a87c-8ac6b69b485d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012890384s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-687155 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-887865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1205 20:27:09.136337   12883 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/kindnet-492071/client.crt: no such file or directory
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/newest-cni/serial/Stop (3.03s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-887865 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-887865 --alsologtostderr -v=3: (3.027534112s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.03s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-084028 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/embed-certs/serial/Pause (3.82s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-084028 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-084028 -n embed-certs-084028
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-084028 -n embed-certs-084028: exit status 2 (378.368753ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-084028 -n embed-certs-084028
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-084028 -n embed-certs-084028: exit status 2 (466.013885ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-084028 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-084028 --alsologtostderr -v=1: (1.048580561s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-084028 -n embed-certs-084028
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-084028 -n embed-certs-084028
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.82s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-687155 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-887865 -n newest-cni-887865
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-887865 -n newest-cni-887865: exit status 7 (123.164247ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-887865 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-687155 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-687155 --alsologtostderr -v=1: (1.168045745s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-687155 -n default-k8s-diff-port-687155
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-687155 -n default-k8s-diff-port-687155: exit status 2 (439.17433ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-687155 -n default-k8s-diff-port-687155
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-687155 -n default-k8s-diff-port-687155: exit status 2 (420.365254ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-687155 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-687155 -n default-k8s-diff-port-687155
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-687155 -n default-k8s-diff-port-687155
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.62s)

TestStartStop/group/newest-cni/serial/SecondStart (26.76s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-887865 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-887865 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (26.44215093s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-887865 -n newest-cni-887865
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.76s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-887865 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.66s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-887865 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-887865 -n newest-cni-887865
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-887865 -n newest-cni-887865: exit status 2 (299.829972ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-887865 -n newest-cni-887865
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-887865 -n newest-cni-887865: exit status 2 (305.796024ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-887865 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-887865 -n newest-cni-887865
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-887865 -n newest-cni-887865
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.66s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6nds7" [3f838132-7b70-4aa8-b85d-faef819434ca] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01512972s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6nds7" [3f838132-7b70-4aa8-b85d-faef819434ca] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008581144s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-885078 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-885078 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-885078 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-885078 -n old-k8s-version-885078
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-885078 -n old-k8s-version-885078: exit status 2 (302.619724ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-885078 -n old-k8s-version-885078
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-885078 -n old-k8s-version-885078: exit status 2 (299.316115ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-885078 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-885078 -n old-k8s-version-885078
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-885078 -n old-k8s-version-885078
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.68s)

Test skip (27/315)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)


TestHyperKitDriverInstallOrUpdate (0s)

driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)


TestHyperkitDriverSkipUpgrade (0s)

driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.75s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-492071 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-492071

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-492071

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-492071

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-492071

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-492071

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-492071

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-492071

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-492071

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-492071

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-492071

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: /etc/hosts:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: /etc/resolv.conf:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-492071

>>> host: crictl pods:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: crictl containers:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> k8s: describe netcat deployment:
error: context "kubenet-492071" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-492071" does not exist

>>> k8s: netcat logs:
error: context "kubenet-492071" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-492071" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-492071" does not exist

>>> k8s: coredns logs:
error: context "kubenet-492071" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-492071" does not exist

>>> k8s: api server logs:
error: context "kubenet-492071" does not exist

>>> host: /etc/cni:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: ip a s:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: ip r s:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: iptables-save:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: iptables table nat:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-492071" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-492071" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-492071" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: kubelet daemon config:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> k8s: kubelet logs:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt
    server: https://127.0.0.1:32926
  name: missing-upgrade-991525
contexts:
- context:
    cluster: missing-upgrade-991525
    user: missing-upgrade-991525
  name: missing-upgrade-991525
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-991525
  user:
    client-certificate: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/missing-upgrade-991525/client.crt
    client-key: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/missing-upgrade-991525/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-492071

>>> host: docker daemon status:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: docker daemon config:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: docker system info:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: cri-docker daemon status:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: cri-docker daemon config:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: cri-dockerd version:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: containerd daemon status:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: containerd daemon config:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: containerd config dump:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: crio daemon status:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: crio daemon config:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: /etc/crio:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

>>> host: crio config:
* Profile "kubenet-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492071"

----------------------- debugLogs end: kubenet-492071 [took: 4.572580139s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-492071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-492071
--- SKIP: TestNetworkPlugins/group/kubenet (4.75s)
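Note: every probe in the debugLogs block above reports "context was not found" or "Profile ... not found" because the kubenet profile was never started: the test skipped before creating a cluster, yet the collector still ran. A rough sketch of such a collector, under the assumption that each probe is simply an external command whose output is printed beneath a label; the probe commands shown are guesses for illustration, not minikube's actual helper:

    package example

    import (
        "fmt"
        "os/exec"
    )

    // probe pairs a report label with the command that produces its output.
    type probe struct {
        label string
        args  []string
    }

    // collectDebugLogs runs each probe and prints whatever comes back,
    // even when the profile does not exist and the command fails.
    func collectDebugLogs(profile string) {
        probes := []probe{
            {">>> netcat: nslookup kubernetes.default:",
                []string{"kubectl", "--context", profile, "exec", "netcat", "--", "nslookup", "kubernetes.default"}},
            {">>> host: crio config:",
                []string{"minikube", "-p", profile, "ssh", "sudo crio config"}},
        }
        for _, p := range probes {
            out, _ := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
            fmt.Println(p.label)
            fmt.Println(string(out)) // error text like `context "..." does not exist` lands here
        }
    }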
TestNetworkPlugins/group/cilium (5.32s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-492071 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-492071

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-492071

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-492071

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-492071

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-492071

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-492071

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-492071

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-492071

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-492071

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-492071

>>> host: /etc/nsswitch.conf:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: /etc/hosts:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: /etc/resolv.conf:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-492071

>>> host: crictl pods:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: crictl containers:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> k8s: describe netcat deployment:
error: context "cilium-492071" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-492071" does not exist

>>> k8s: netcat logs:
error: context "cilium-492071" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-492071" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-492071" does not exist

>>> k8s: coredns logs:
error: context "cilium-492071" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-492071" does not exist

>>> k8s: api server logs:
error: context "cilium-492071" does not exist

>>> host: /etc/cni:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: ip a s:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: ip r s:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: iptables-save:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: iptables table nat:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-492071

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-492071

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-492071" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-492071" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-492071

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-492071

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-492071" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-492071" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-492071" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-492071" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-492071" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: kubelet daemon config:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> k8s: kubelet logs:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17731-6088/.minikube/ca.crt
    server: https://127.0.0.1:32926
  name: missing-upgrade-991525
contexts:
- context:
    cluster: missing-upgrade-991525
    user: missing-upgrade-991525
  name: missing-upgrade-991525
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-991525
  user:
    client-certificate: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/missing-upgrade-991525/client.crt
    client-key: /home/jenkins/minikube-integration/17731-6088/.minikube/profiles/missing-upgrade-991525/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-492071

>>> host: docker daemon status:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: docker daemon config:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: docker system info:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: cri-docker daemon status:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: cri-docker daemon config:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: cri-dockerd version:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: containerd daemon status:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: containerd daemon config:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: containerd config dump:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: crio daemon status:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: crio daemon config:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: /etc/crio:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

>>> host: crio config:
* Profile "cilium-492071" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492071"

----------------------- debugLogs end: cilium-492071 [took: 5.043860606s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-492071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-492071
--- SKIP: TestNetworkPlugins/group/cilium (5.32s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-001752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-001752
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
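Note: the "Cleaning up ... profile" lines in this report come from a shared test helper that always deletes the profile with the binary under test, whether or not the test ran. A minimal hedged sketch of that pattern; the binary path is taken from the log, the implementation is assumed:

    package example

    import (
        "os/exec"
        "testing"
    )

    // cleanupProfile deletes a minikube profile and only logs on failure,
    // so a cleanup problem never masks the test's own result.
    func cleanupProfile(t *testing.T, profile string) {
        t.Helper()
        out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
        if err != nil {
            t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
        }
    }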