Test Report: Docker_Linux_crio 17644

406b3a49e2f2efe39684a1d536accd2e485fd514:2023-11-27:32048

Failed tests (7/308)

|-------|--------------------------------------------------------------|--------------|
| Order | Failed test                                                  | Duration (s) |
|-------|--------------------------------------------------------------|--------------|
|    28 | TestAddons/parallel/Ingress                                  |       156.53 |
|    83 | TestFunctional/parallel/ConfigCmd                            |         0.52 |
|   144 | TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon  |        11.12 |
|   159 | TestIngressAddonLegacy/serial/ValidateIngressAddons          |       177.37 |
|   209 | TestMultiNode/serial/PingHostFrom2Pods                       |         3.53 |
|   230 | TestRunningBinaryUpgrade                                     |        66.04 |
|   235 | TestStoppedBinaryUpgrade/Upgrade                             |       102.53 |
|-------|--------------------------------------------------------------|--------------|
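To retry one of these locally, the usual pattern is to filter the integration suite down to the failing subtest with go test's -run regex. The invocation below is only a sketch, not the exact CI command: the package path and the --minikube-start-args flag name are assumptions that may differ per checkout.

    # Hypothetical local repro: -run matches the slash-separated subtest path;
    # the start args mirror this job's docker driver + cri-o runtime.
    go test ./test/integration -v -timeout 30m \
      -run 'TestAddons/parallel/Ingress' \
      -args --minikube-start-args='--driver=docker --container-runtime=crio'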
TestAddons/parallel/Ingress (156.53s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-112776 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-112776 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-112776 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [aeef38b3-1941-4fd8-9027-817daad736c1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [aeef38b3-1941-4fd8-9027-817daad736c1] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.011376403s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-112776 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-112776 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.45275077s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
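For context: curl exits with status 28 on CURLE_OPERATION_TIMEDOUT, and the "ssh: Process exited with status 28" above is that remote exit status surfaced verbatim, so the request to the ingress timed out rather than being refused outright. While the cluster is still up, the probe can be replayed by hand with the same command the test runs:

    # Replay the failing ingress probe; a healthy ingress serves the nginx
    # welcome page for this Host header.
    out/minikube-linux-amd64 -p addons-112776 ssh \
      "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"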
addons_test.go:285: (dbg) Run:  kubectl --context addons-112776 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-112776 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-112776 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-112776 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-112776 addons disable ingress --alsologtostderr -v=1: (7.628110925s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-112776
helpers_test.go:235: (dbg) docker inspect addons-112776:
-- stdout --
	[
	    {
	        "Id": "ac8fd910f8ca8daccde30f168451ed3a3c727365db883c1f0be5fa79ac454b74",
	        "Created": "2023-11-27T11:17:34.811988206Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 80750,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-27T11:17:35.15372903Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7b13b8068c138827ed6edd3fefc1858e39f15798035b600ada929f3fdbe10859",
	        "ResolvConfPath": "/var/lib/docker/containers/ac8fd910f8ca8daccde30f168451ed3a3c727365db883c1f0be5fa79ac454b74/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac8fd910f8ca8daccde30f168451ed3a3c727365db883c1f0be5fa79ac454b74/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac8fd910f8ca8daccde30f168451ed3a3c727365db883c1f0be5fa79ac454b74/hosts",
	        "LogPath": "/var/lib/docker/containers/ac8fd910f8ca8daccde30f168451ed3a3c727365db883c1f0be5fa79ac454b74/ac8fd910f8ca8daccde30f168451ed3a3c727365db883c1f0be5fa79ac454b74-json.log",
	        "Name": "/addons-112776",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-112776:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-112776",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7a37ab729976a0c69c4b490c9b72699ed908855240e3a1918721adb737374bd5-init/diff:/var/lib/docker/overlay2/6890504cd609c764c809309abb3d72eb8ac39b0411e6657ccda2a2f23689cb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a37ab729976a0c69c4b490c9b72699ed908855240e3a1918721adb737374bd5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a37ab729976a0c69c4b490c9b72699ed908855240e3a1918721adb737374bd5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a37ab729976a0c69c4b490c9b72699ed908855240e3a1918721adb737374bd5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-112776",
	                "Source": "/var/lib/docker/volumes/addons-112776/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-112776",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-112776",
	                "name.minikube.sigs.k8s.io": "addons-112776",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bc2e1ba8fde8722e66df7c91ab05c65c5a272b997a38d6c23b7e04b7209674e8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bc2e1ba8fde8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-112776": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ac8fd910f8ca",
	                        "addons-112776"
	                    ],
	                    "NetworkID": "fcb71eda4ed2478836e18f80680750de9eb637180d1ff1d3b552b76f0ea18e37",
	                    "EndpointID": "7c7f1f52d11d313313ed2d848cbde3b57c7ee6cab4cab1dbeab59a2713591341",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
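The dump above is the post-mortem helper's full capture; when only a single field matters, docker inspect's --format flag can query it directly. These one-liners mirror the format expressions the harness itself uses later in this log for the container state and the host port mapped to the guest's SSH port:

    # Container run state, then the host port Docker mapped to 22/tcp.
    docker inspect addons-112776 --format '{{.State.Status}}'
    docker inspect addons-112776 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'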
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-112776 -n addons-112776
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-112776 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-112776 logs -n 25: (1.164313668s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-281039                                                                     | download-only-281039   | jenkins | v1.32.0 | 27 Nov 23 11:17 UTC | 27 Nov 23 11:17 UTC |
	| delete  | -p download-only-281039                                                                     | download-only-281039   | jenkins | v1.32.0 | 27 Nov 23 11:17 UTC | 27 Nov 23 11:17 UTC |
	| start   | --download-only -p                                                                          | download-docker-213707 | jenkins | v1.32.0 | 27 Nov 23 11:17 UTC |                     |
	|         | download-docker-213707                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-213707                                                                   | download-docker-213707 | jenkins | v1.32.0 | 27 Nov 23 11:17 UTC | 27 Nov 23 11:17 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-041707   | jenkins | v1.32.0 | 27 Nov 23 11:17 UTC |                     |
	|         | binary-mirror-041707                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42111                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-041707                                                                     | binary-mirror-041707   | jenkins | v1.32.0 | 27 Nov 23 11:17 UTC | 27 Nov 23 11:17 UTC |
	| addons  | enable dashboard -p                                                                         | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:17 UTC |                     |
	|         | addons-112776                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:17 UTC |                     |
	|         | addons-112776                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-112776 --wait=true                                                                | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:17 UTC | 27 Nov 23 11:19 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:19 UTC | 27 Nov 23 11:19 UTC |
	|         | -p addons-112776                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:19 UTC | 27 Nov 23 11:19 UTC |
	|         | addons-112776                                                                               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:19 UTC | 27 Nov 23 11:20 UTC |
	|         | addons-112776                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-112776 ssh cat                                                                       | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:19 UTC | 27 Nov 23 11:19 UTC |
	|         | /opt/local-path-provisioner/pvc-dbf749c2-173c-47fe-82f9-107cdc643fe7_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-112776 addons disable                                                                | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:19 UTC | 27 Nov 23 11:19 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-112776 ip                                                                            | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:19 UTC | 27 Nov 23 11:19 UTC |
	| addons  | addons-112776 addons disable                                                                | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:19 UTC | 27 Nov 23 11:19 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:19 UTC | 27 Nov 23 11:20 UTC |
	|         | -p addons-112776                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-112776 addons disable                                                                | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:20 UTC | 27 Nov 23 11:20 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-112776 addons                                                                        | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:20 UTC | 27 Nov 23 11:20 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-112776 ssh curl -s                                                                   | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:20 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-112776 addons                                                                        | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:20 UTC | 27 Nov 23 11:20 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-112776 addons                                                                        | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:20 UTC | 27 Nov 23 11:20 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-112776 ip                                                                            | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:22 UTC | 27 Nov 23 11:22 UTC |
	| addons  | addons-112776 addons disable                                                                | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:22 UTC | 27 Nov 23 11:22 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-112776 addons disable                                                                | addons-112776          | jenkins | v1.32.0 | 27 Nov 23 11:22 UTC | 27 Nov 23 11:22 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 11:17:10
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 11:17:10.625936   80068 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:17:10.626055   80068 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:17:10.626067   80068 out.go:309] Setting ErrFile to fd 2...
	I1127 11:17:10.626072   80068 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:17:10.626290   80068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
	I1127 11:17:10.626938   80068 out.go:303] Setting JSON to false
	I1127 11:17:10.627812   80068 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7184,"bootTime":1701076647,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:17:10.627878   80068 start.go:138] virtualization: kvm guest
	I1127 11:17:10.630179   80068 out.go:177] * [addons-112776] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:17:10.631872   80068 out.go:177]   - MINIKUBE_LOCATION=17644
	I1127 11:17:10.631929   80068 notify.go:220] Checking for updates...
	I1127 11:17:10.633493   80068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:17:10.635109   80068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:17:10.636583   80068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	I1127 11:17:10.637923   80068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 11:17:10.639255   80068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 11:17:10.641041   80068 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:17:10.661950   80068 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 11:17:10.662068   80068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:17:10.716735   80068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-11-27 11:17:10.708057048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:17:10.716882   80068 docker.go:295] overlay module found
	I1127 11:17:10.719051   80068 out.go:177] * Using the docker driver based on user configuration
	I1127 11:17:10.720899   80068 start.go:298] selected driver: docker
	I1127 11:17:10.720924   80068 start.go:902] validating driver "docker" against <nil>
	I1127 11:17:10.720937   80068 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 11:17:10.721797   80068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:17:10.772561   80068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-11-27 11:17:10.764621523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:17:10.772754   80068 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 11:17:10.773015   80068 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 11:17:10.774770   80068 out.go:177] * Using Docker driver with root privileges
	I1127 11:17:10.776555   80068 cni.go:84] Creating CNI manager for ""
	I1127 11:17:10.776574   80068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 11:17:10.776590   80068 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1127 11:17:10.776606   80068 start_flags.go:323] config:
	{Name:addons-112776 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-112776 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:17:10.778478   80068 out.go:177] * Starting control plane node addons-112776 in cluster addons-112776
	I1127 11:17:10.779977   80068 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 11:17:10.781493   80068 out.go:177] * Pulling base image ...
	I1127 11:17:10.783169   80068 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 11:17:10.783197   80068 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 11:17:10.783247   80068 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1127 11:17:10.783260   80068 cache.go:56] Caching tarball of preloaded images
	I1127 11:17:10.783362   80068 preload.go:174] Found /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1127 11:17:10.783373   80068 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1127 11:17:10.783806   80068 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/config.json ...
	I1127 11:17:10.783836   80068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/config.json: {Name:mkc981752b6624c58af5ac514d1d2e8acaeecee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:17:10.798400   80068 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 11:17:10.798521   80068 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1127 11:17:10.798537   80068 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory, skipping pull
	I1127 11:17:10.798541   80068 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in cache, skipping pull
	I1127 11:17:10.798551   80068 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1127 11:17:10.798557   80068 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 from local cache
	I1127 11:17:21.926876   80068 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 from cached tarball
	I1127 11:17:21.926922   80068 cache.go:194] Successfully downloaded all kic artifacts
	I1127 11:17:21.926981   80068 start.go:365] acquiring machines lock for addons-112776: {Name:mk413b0fb3cde3ff8979311a0a8705fce9b661c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:17:21.927123   80068 start.go:369] acquired machines lock for "addons-112776" in 117.08µs
	I1127 11:17:21.927167   80068 start.go:93] Provisioning new machine with config: &{Name:addons-112776 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-112776 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 11:17:21.927261   80068 start.go:125] createHost starting for "" (driver="docker")
	I1127 11:17:21.929165   80068 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1127 11:17:21.929414   80068 start.go:159] libmachine.API.Create for "addons-112776" (driver="docker")
	I1127 11:17:21.929470   80068 client.go:168] LocalClient.Create starting
	I1127 11:17:21.929617   80068 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem
	I1127 11:17:22.107687   80068 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem
	I1127 11:17:22.272588   80068 cli_runner.go:164] Run: docker network inspect addons-112776 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1127 11:17:22.287972   80068 cli_runner.go:211] docker network inspect addons-112776 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1127 11:17:22.288050   80068 network_create.go:281] running [docker network inspect addons-112776] to gather additional debugging logs...
	I1127 11:17:22.288073   80068 cli_runner.go:164] Run: docker network inspect addons-112776
	W1127 11:17:22.302209   80068 cli_runner.go:211] docker network inspect addons-112776 returned with exit code 1
	I1127 11:17:22.302238   80068 network_create.go:284] error running [docker network inspect addons-112776]: docker network inspect addons-112776: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-112776 not found
	I1127 11:17:22.302254   80068 network_create.go:286] output of [docker network inspect addons-112776]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-112776 not found
	
	** /stderr **
	I1127 11:17:22.302359   80068 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 11:17:22.317222   80068 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002ec61e0}
	I1127 11:17:22.317264   80068 network_create.go:124] attempt to create docker network addons-112776 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1127 11:17:22.317309   80068 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-112776 addons-112776
	I1127 11:17:22.366264   80068 network_create.go:108] docker network addons-112776 192.168.49.0/24 created
	I1127 11:17:22.366303   80068 kic.go:121] calculated static IP "192.168.49.2" for the "addons-112776" container
	I1127 11:17:22.366407   80068 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1127 11:17:22.380517   80068 cli_runner.go:164] Run: docker volume create addons-112776 --label name.minikube.sigs.k8s.io=addons-112776 --label created_by.minikube.sigs.k8s.io=true
	I1127 11:17:22.396653   80068 oci.go:103] Successfully created a docker volume addons-112776
	I1127 11:17:22.396749   80068 cli_runner.go:164] Run: docker run --rm --name addons-112776-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-112776 --entrypoint /usr/bin/test -v addons-112776:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1127 11:17:29.627419   80068 cli_runner.go:217] Completed: docker run --rm --name addons-112776-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-112776 --entrypoint /usr/bin/test -v addons-112776:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib: (7.230619849s)
	I1127 11:17:29.627457   80068 oci.go:107] Successfully prepared a docker volume addons-112776
	I1127 11:17:29.627518   80068 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 11:17:29.627552   80068 kic.go:194] Starting extracting preloaded images to volume ...
	I1127 11:17:29.627606   80068 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-112776:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1127 11:17:34.745066   80068 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-112776:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (5.117422637s)
	I1127 11:17:34.745102   80068 kic.go:203] duration metric: took 5.117545 seconds to extract preloaded images to volume
	W1127 11:17:34.745265   80068 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1127 11:17:34.745384   80068 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1127 11:17:34.797240   80068 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-112776 --name addons-112776 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-112776 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-112776 --network addons-112776 --ip 192.168.49.2 --volume addons-112776:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 11:17:35.161482   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Running}}
	I1127 11:17:35.179794   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:17:35.198229   80068 cli_runner.go:164] Run: docker exec addons-112776 stat /var/lib/dpkg/alternatives/iptables
	I1127 11:17:35.239279   80068 oci.go:144] the created container "addons-112776" has a running status.
	I1127 11:17:35.239311   80068 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa...
	I1127 11:17:35.294108   80068 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1127 11:17:35.314958   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:17:35.332636   80068 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1127 11:17:35.332661   80068 kic_runner.go:114] Args: [docker exec --privileged addons-112776 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1127 11:17:35.401463   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:17:35.420023   80068 machine.go:88] provisioning docker machine ...
	I1127 11:17:35.420082   80068 ubuntu.go:169] provisioning hostname "addons-112776"
	I1127 11:17:35.420175   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:17:35.443468   80068 main.go:141] libmachine: Using SSH client type: native
	I1127 11:17:35.443956   80068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1127 11:17:35.443982   80068 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-112776 && echo "addons-112776" | sudo tee /etc/hostname
	I1127 11:17:35.445350   80068 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47778->127.0.0.1:32772: read: connection reset by peer
	I1127 11:17:38.579402   80068 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-112776
	
	I1127 11:17:38.579475   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:17:38.597495   80068 main.go:141] libmachine: Using SSH client type: native
	I1127 11:17:38.597854   80068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1127 11:17:38.597873   80068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-112776' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-112776/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-112776' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 11:17:38.728134   80068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 11:17:38.728170   80068 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17644-72381/.minikube CaCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17644-72381/.minikube}
	I1127 11:17:38.728197   80068 ubuntu.go:177] setting up certificates
	I1127 11:17:38.728213   80068 provision.go:83] configureAuth start
	I1127 11:17:38.728289   80068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-112776
	I1127 11:17:38.746033   80068 provision.go:138] copyHostCerts
	I1127 11:17:38.746167   80068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem (1082 bytes)
	I1127 11:17:38.746513   80068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem (1123 bytes)
	I1127 11:17:38.746653   80068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem (1675 bytes)
	I1127 11:17:38.746747   80068 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca-key.pem org=jenkins.addons-112776 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-112776]
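For reference, the san=[...] list in the line above becomes the certificate's subject alternative names. Below is a minimal Go sketch of producing a CA-signed server certificate with that kind of SAN list; it is a rough illustration, not minikube's actual provisioning code, and the throwaway CA is a hypothetical stand-in for the ca.pem/ca-key.pem pair loaded from disk:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// must unwraps (value, error) pairs so the sketch stays short.
func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Hypothetical throwaway CA; the real flow reuses an existing CA key pair.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Server certificate carrying the SAN entries seen in the log line above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-112776"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "addons-112776"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}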
	I1127 11:17:38.918476   80068 provision.go:172] copyRemoteCerts
	I1127 11:17:38.918543   80068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 11:17:38.918589   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:17:38.934713   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:17:39.024348   80068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1127 11:17:39.047337   80068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1127 11:17:39.069009   80068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 11:17:39.090622   80068 provision.go:86] duration metric: configureAuth took 362.389524ms
	I1127 11:17:39.090659   80068 ubuntu.go:193] setting minikube options for container-runtime
	I1127 11:17:39.090831   80068 config.go:182] Loaded profile config "addons-112776": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 11:17:39.090941   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:17:39.107328   80068 main.go:141] libmachine: Using SSH client type: native
	I1127 11:17:39.107764   80068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1127 11:17:39.107791   80068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 11:17:39.320092   80068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 11:17:39.320121   80068 machine.go:91] provisioned docker machine in 3.900059011s
	I1127 11:17:39.320131   80068 client.go:171] LocalClient.Create took 17.39065117s
	I1127 11:17:39.320150   80068 start.go:167] duration metric: libmachine.API.Create for "addons-112776" took 17.390738492s
	I1127 11:17:39.320164   80068 start.go:300] post-start starting for "addons-112776" (driver="docker")
	I1127 11:17:39.320180   80068 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 11:17:39.320254   80068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 11:17:39.320307   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:17:39.336920   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:17:39.428514   80068 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 11:17:39.431578   80068 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 11:17:39.431643   80068 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 11:17:39.431685   80068 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 11:17:39.431700   80068 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1127 11:17:39.431719   80068 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-72381/.minikube/addons for local assets ...
	I1127 11:17:39.431783   80068 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-72381/.minikube/files for local assets ...
	I1127 11:17:39.431812   80068 start.go:303] post-start completed in 111.637349ms
	I1127 11:17:39.432133   80068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-112776
	I1127 11:17:39.448910   80068 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/config.json ...
	I1127 11:17:39.449196   80068 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 11:17:39.449256   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:17:39.465966   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:17:39.556478   80068 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 11:17:39.560566   80068 start.go:128] duration metric: createHost completed in 17.633283876s
	I1127 11:17:39.560594   80068 start.go:83] releasing machines lock for "addons-112776", held for 17.633456975s
	I1127 11:17:39.560667   80068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-112776
	I1127 11:17:39.578139   80068 ssh_runner.go:195] Run: cat /version.json
	I1127 11:17:39.578183   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:17:39.578259   80068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 11:17:39.578325   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:17:39.595186   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:17:39.596557   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:17:39.767704   80068 ssh_runner.go:195] Run: systemctl --version
	I1127 11:17:39.771886   80068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 11:17:39.909700   80068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 11:17:39.913931   80068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 11:17:39.932200   80068 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1127 11:17:39.932311   80068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 11:17:39.959153   80068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1127 11:17:39.959181   80068 start.go:472] detecting cgroup driver to use...
	I1127 11:17:39.959225   80068 detect.go:196] detected "cgroupfs" cgroup driver on host os
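That detection can be reproduced by hand by asking the Docker daemon for its cgroup driver. A small Go sketch, assuming a local docker CLI is on PATH (minikube's real detection path differs):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// "docker info" exposes the daemon's cgroup driver as a template field.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	// Prints "cgroupfs" or "systemd", matching the value logged above.
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
}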
	I1127 11:17:39.959270   80068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 11:17:39.973580   80068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 11:17:39.983745   80068 docker.go:203] disabling cri-docker service (if available) ...
	I1127 11:17:39.983807   80068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 11:17:39.996202   80068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 11:17:40.009659   80068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 11:17:40.088950   80068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 11:17:40.164763   80068 docker.go:219] disabling docker service ...
	I1127 11:17:40.164821   80068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 11:17:40.182694   80068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 11:17:40.193250   80068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 11:17:40.268673   80068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 11:17:40.348868   80068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 11:17:40.359337   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 11:17:40.374778   80068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1127 11:17:40.374837   80068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 11:17:40.384171   80068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1127 11:17:40.384236   80068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 11:17:40.393598   80068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 11:17:40.402505   80068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
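After those sed edits, the drop-in at /etc/crio/crio.conf.d/02-crio.conf should contain lines roughly like the excerpt below. This is reconstructed from the commands above rather than copied from the node, and the section headers are an assumption based on cri-o's standard config layout:

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"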
	I1127 11:17:40.411833   80068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 11:17:40.420571   80068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 11:17:40.428642   80068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 11:17:40.436449   80068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 11:17:40.511847   80068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1127 11:17:40.819489   80068 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1127 11:17:40.819572   80068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1127 11:17:40.822998   80068 start.go:540] Will wait 60s for crictl version
	I1127 11:17:40.823053   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:17:40.826099   80068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 11:17:40.860004   80068 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1127 11:17:40.860099   80068 ssh_runner.go:195] Run: crio --version
	I1127 11:17:40.895541   80068 ssh_runner.go:195] Run: crio --version
	I1127 11:17:40.932076   80068 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1127 11:17:40.933544   80068 cli_runner.go:164] Run: docker network inspect addons-112776 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 11:17:40.949914   80068 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1127 11:17:40.953450   80068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 11:17:40.963492   80068 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 11:17:40.963544   80068 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 11:17:41.019080   80068 crio.go:496] all images are preloaded for cri-o runtime.
	I1127 11:17:41.019110   80068 crio.go:415] Images already preloaded, skipping extraction
	I1127 11:17:41.019173   80068 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 11:17:41.051311   80068 crio.go:496] all images are preloaded for cri-o runtime.
	I1127 11:17:41.051334   80068 cache_images.go:84] Images are preloaded, skipping loading
	I1127 11:17:41.051405   80068 ssh_runner.go:195] Run: crio config
	I1127 11:17:41.095300   80068 cni.go:84] Creating CNI manager for ""
	I1127 11:17:41.095326   80068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 11:17:41.095349   80068 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 11:17:41.095366   80068 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-112776 NodeName:addons-112776 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1127 11:17:41.095508   80068 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-112776"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1127 11:17:41.095581   80068 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-112776 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-112776 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1127 11:17:41.095632   80068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1127 11:17:41.104078   80068 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 11:17:41.104153   80068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1127 11:17:41.112354   80068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1127 11:17:41.128598   80068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1127 11:17:41.145333   80068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1127 11:17:41.161468   80068 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1127 11:17:41.164907   80068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 11:17:41.174955   80068 certs.go:56] Setting up /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776 for IP: 192.168.49.2
	I1127 11:17:41.175044   80068 certs.go:190] acquiring lock for shared ca certs: {Name:mk5858a15575801c48b8e08b34d7442dd346ca1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:17:41.175178   80068 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.key
	I1127 11:17:41.415323   80068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt ...
	I1127 11:17:41.415370   80068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt: {Name:mkb610b906ea34cdea55abc1dc5589ec353b3e22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:17:41.415549   80068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17644-72381/.minikube/ca.key ...
	I1127 11:17:41.415560   80068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/ca.key: {Name:mk5ddc57cd4b3803e6e150a84596098a2230baa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:17:41.415638   80068 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.key
	I1127 11:17:41.546400   80068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.crt ...
	I1127 11:17:41.546435   80068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.crt: {Name:mkfe0733e960960d7eeab74f4c1047539ba13f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:17:41.546598   80068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.key ...
	I1127 11:17:41.546608   80068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.key: {Name:mk3e2607952b77372e72af768a185fcd3e64463f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:17:41.546718   80068 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.key
	I1127 11:17:41.546732   80068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt with IP's: []
	I1127 11:17:41.767834   80068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt ...
	I1127 11:17:41.767867   80068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: {Name:mk0b1381d1970ab07c49b7c67ea15a43e5eea891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:17:41.768036   80068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.key ...
	I1127 11:17:41.768046   80068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.key: {Name:mk74357b73ef94a7ca4c399d4b9a9482333efd75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:17:41.768111   80068 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/apiserver.key.dd3b5fb2
	I1127 11:17:41.768129   80068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1127 11:17:41.950407   80068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/apiserver.crt.dd3b5fb2 ...
	I1127 11:17:41.950440   80068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/apiserver.crt.dd3b5fb2: {Name:mk2b0d99dc94a8d61734f8a8b9c9887087f2d9fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:17:41.950606   80068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/apiserver.key.dd3b5fb2 ...
	I1127 11:17:41.950620   80068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/apiserver.key.dd3b5fb2: {Name:mk4b35869e898c913ee3a811e3cf08985e7f7c00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:17:41.950703   80068 certs.go:337] copying /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/apiserver.crt
	I1127 11:17:41.950791   80068 certs.go:341] copying /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/apiserver.key
	I1127 11:17:41.950841   80068 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/proxy-client.key
	I1127 11:17:41.950858   80068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/proxy-client.crt with IP's: []
	I1127 11:17:42.244053   80068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/proxy-client.crt ...
	I1127 11:17:42.244090   80068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/proxy-client.crt: {Name:mk3b6895774bd9408e4ad1372064fad6c3e0730f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:17:42.244272   80068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/proxy-client.key ...
	I1127 11:17:42.244283   80068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/proxy-client.key: {Name:mk7af1a5f72ea8a3cef899449cca06d0c97c61e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:17:42.244489   80068 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca-key.pem (1679 bytes)
	I1127 11:17:42.244533   80068 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem (1082 bytes)
	I1127 11:17:42.244558   80068 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem (1123 bytes)
	I1127 11:17:42.244585   80068 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem (1675 bytes)
	I1127 11:17:42.245199   80068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1127 11:17:42.267869   80068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1127 11:17:42.290177   80068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1127 11:17:42.312305   80068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1127 11:17:42.334132   80068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 11:17:42.355915   80068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1127 11:17:42.377960   80068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 11:17:42.400317   80068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1127 11:17:42.422464   80068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 11:17:42.445318   80068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1127 11:17:42.461904   80068 ssh_runner.go:195] Run: openssl version
	I1127 11:17:42.467233   80068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 11:17:42.476159   80068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:17:42.479616   80068 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 11:17 /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:17:42.479698   80068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:17:42.486115   80068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1127 11:17:42.495090   80068 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 11:17:42.498521   80068 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 11:17:42.498582   80068 kubeadm.go:404] StartCluster: {Name:addons-112776 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-112776 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:17:42.498698   80068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1127 11:17:42.498774   80068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1127 11:17:42.532534   80068 cri.go:89] found id: ""
	I1127 11:17:42.532597   80068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1127 11:17:42.540680   80068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 11:17:42.548931   80068 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1127 11:17:42.548984   80068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 11:17:42.557154   80068 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 11:17:42.557207   80068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1127 11:17:42.640264   80068 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1127 11:17:42.703977   80068 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 11:17:51.951414   80068 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1127 11:17:51.951499   80068 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 11:17:51.951686   80068 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1127 11:17:51.951823   80068 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1046-gcp
	I1127 11:17:51.951874   80068 kubeadm.go:322] OS: Linux
	I1127 11:17:51.951936   80068 kubeadm.go:322] CGROUPS_CPU: enabled
	I1127 11:17:51.952005   80068 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1127 11:17:51.952078   80068 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1127 11:17:51.952148   80068 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1127 11:17:51.952214   80068 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1127 11:17:51.952286   80068 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1127 11:17:51.952351   80068 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1127 11:17:51.952417   80068 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1127 11:17:51.952482   80068 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1127 11:17:51.952573   80068 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 11:17:51.952737   80068 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 11:17:51.952851   80068 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 11:17:51.952926   80068 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 11:17:51.954889   80068 out.go:204]   - Generating certificates and keys ...
	I1127 11:17:51.955032   80068 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 11:17:51.955121   80068 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 11:17:51.955210   80068 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 11:17:51.955283   80068 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1127 11:17:51.955362   80068 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1127 11:17:51.955424   80068 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1127 11:17:51.955492   80068 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1127 11:17:51.955642   80068 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-112776 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1127 11:17:51.955729   80068 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1127 11:17:51.955891   80068 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-112776 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1127 11:17:51.955971   80068 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 11:17:51.956044   80068 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 11:17:51.956134   80068 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1127 11:17:51.956220   80068 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 11:17:51.956290   80068 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 11:17:51.956356   80068 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 11:17:51.956494   80068 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 11:17:51.956600   80068 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 11:17:51.956739   80068 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 11:17:51.956830   80068 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 11:17:51.958559   80068 out.go:204]   - Booting up control plane ...
	I1127 11:17:51.958702   80068 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 11:17:51.958816   80068 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 11:17:51.958922   80068 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 11:17:51.959084   80068 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 11:17:51.959215   80068 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 11:17:51.959306   80068 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1127 11:17:51.959545   80068 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 11:17:51.959648   80068 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001949 seconds
	I1127 11:17:51.959822   80068 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 11:17:51.959987   80068 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 11:17:51.960084   80068 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 11:17:51.960292   80068 kubeadm.go:322] [mark-control-plane] Marking the node addons-112776 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1127 11:17:51.960387   80068 kubeadm.go:322] [bootstrap-token] Using token: kw1kgk.1s4h1fll8234mfro
	I1127 11:17:51.961736   80068 out.go:204]   - Configuring RBAC rules ...
	I1127 11:17:51.961890   80068 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 11:17:51.962025   80068 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 11:17:51.962231   80068 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 11:17:51.962392   80068 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 11:17:51.962541   80068 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 11:17:51.962680   80068 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 11:17:51.962828   80068 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 11:17:51.962885   80068 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 11:17:51.962950   80068 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 11:17:51.962959   80068 kubeadm.go:322] 
	I1127 11:17:51.963038   80068 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 11:17:51.963047   80068 kubeadm.go:322] 
	I1127 11:17:51.963142   80068 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 11:17:51.963153   80068 kubeadm.go:322] 
	I1127 11:17:51.963188   80068 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 11:17:51.963262   80068 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 11:17:51.963344   80068 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 11:17:51.963353   80068 kubeadm.go:322] 
	I1127 11:17:51.963420   80068 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1127 11:17:51.963430   80068 kubeadm.go:322] 
	I1127 11:17:51.963490   80068 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1127 11:17:51.963500   80068 kubeadm.go:322] 
	I1127 11:17:51.963562   80068 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 11:17:51.963660   80068 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 11:17:51.963777   80068 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 11:17:51.963787   80068 kubeadm.go:322] 
	I1127 11:17:51.963924   80068 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1127 11:17:51.964062   80068 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 11:17:51.964079   80068 kubeadm.go:322] 
	I1127 11:17:51.964193   80068 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token kw1kgk.1s4h1fll8234mfro \
	I1127 11:17:51.964319   80068 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8a429d79c655c2807afe3f51b29d4e9332b2ae21312f3b8d4be03bf35a7ebe07 \
	I1127 11:17:51.964361   80068 kubeadm.go:322] 	--control-plane 
	I1127 11:17:51.964371   80068 kubeadm.go:322] 
	I1127 11:17:51.964486   80068 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 11:17:51.964498   80068 kubeadm.go:322] 
	I1127 11:17:51.964619   80068 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token kw1kgk.1s4h1fll8234mfro \
	I1127 11:17:51.964767   80068 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8a429d79c655c2807afe3f51b29d4e9332b2ae21312f3b8d4be03bf35a7ebe07 
	I1127 11:17:51.964795   80068 cni.go:84] Creating CNI manager for ""
	I1127 11:17:51.964812   80068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 11:17:51.967681   80068 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1127 11:17:51.969105   80068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1127 11:17:51.973010   80068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1127 11:17:51.973034   80068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1127 11:17:52.043594   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1127 11:17:52.737748   80068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 11:17:52.737817   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:52.737836   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=81390b5609e7feb2151fde4633273d04eb05a21f minikube.k8s.io/name=addons-112776 minikube.k8s.io/updated_at=2023_11_27T11_17_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:52.853256   80068 ops.go:34] apiserver oom_adj: -16
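An oom_adj of -16 tells the kernel's OOM killer to strongly avoid the apiserver process. The probe logged above can be reproduced with a short Go sketch; pgrep and the /proc path mirror what the log shows, while -o (oldest matching process) is an assumption made here to get a single PID (oom_adj is the legacy knob; oom_score_adj is the modern equivalent):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the oldest process named kube-apiserver.
	out, err := exec.Command("pgrep", "-o", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	// Read its legacy OOM adjustment value from procfs.
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("kube-apiserver oom_adj: %s", adj)
}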
	I1127 11:17:52.853401   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:52.918590   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:53.490882   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:53.991076   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:54.490363   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:54.990931   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:55.491198   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:55.991201   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:56.490680   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:56.991275   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:57.490762   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:57.991015   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:58.490995   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:58.990577   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:59.490986   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:17:59.990555   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:18:00.490708   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:18:00.990335   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:18:01.490370   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:18:01.990343   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:18:02.491063   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:18:02.990661   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:18:03.490670   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:18:03.991285   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:18:04.490825   80068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:18:04.558818   80068 kubeadm.go:1081] duration metric: took 11.82106172s to wait for elevateKubeSystemPrivileges.
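The block of repeated "kubectl get sa default" runs above is a plain poll-until-ready loop: minikube retries until the default ServiceAccount exists, then reports the elapsed time. A minimal Go sketch of the same pattern, with an illustrative 500ms interval and 2-minute deadline (both assumptions, not values taken from minikube):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Exit status 0 means the default ServiceAccount exists.
		err := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}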
	I1127 11:18:04.558858   80068 kubeadm.go:406] StartCluster complete in 22.060281635s
	I1127 11:18:04.558893   80068 settings.go:142] acquiring lock: {Name:mkff9c1e77c1a71ba60e8e9acbffbd8799fc8519 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:18:04.559025   80068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:18:04.559421   80068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/kubeconfig: {Name:mke9c53ad28720f96b51e42e525b68d1097488ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:18:04.559637   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 11:18:04.559744   80068 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1127 11:18:04.559854   80068 addons.go:69] Setting default-storageclass=true in profile "addons-112776"
	I1127 11:18:04.559872   80068 addons.go:69] Setting inspektor-gadget=true in profile "addons-112776"
	I1127 11:18:04.559886   80068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-112776"
	I1127 11:18:04.559899   80068 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-112776"
	I1127 11:18:04.559908   80068 addons.go:231] Setting addon inspektor-gadget=true in "addons-112776"
	I1127 11:18:04.559898   80068 addons.go:69] Setting registry=true in profile "addons-112776"
	I1127 11:18:04.559923   80068 config.go:182] Loaded profile config "addons-112776": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 11:18:04.559920   80068 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-112776"
	I1127 11:18:04.559936   80068 addons.go:69] Setting metrics-server=true in profile "addons-112776"
	I1127 11:18:04.559959   80068 addons.go:231] Setting addon metrics-server=true in "addons-112776"
	I1127 11:18:04.559928   80068 addons.go:231] Setting addon registry=true in "addons-112776"
	I1127 11:18:04.559977   80068 host.go:66] Checking if "addons-112776" exists ...
	I1127 11:18:04.560005   80068 host.go:66] Checking if "addons-112776" exists ...
	I1127 11:18:04.560005   80068 host.go:66] Checking if "addons-112776" exists ...
	I1127 11:18:04.560175   80068 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-112776"
	I1127 11:18:04.560194   80068 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-112776"
	I1127 11:18:04.560237   80068 host.go:66] Checking if "addons-112776" exists ...
	I1127 11:18:04.560325   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:04.560347   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:04.560464   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:04.560494   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:04.560504   80068 addons.go:69] Setting storage-provisioner=true in profile "addons-112776"
	I1127 11:18:04.560525   80068 addons.go:231] Setting addon storage-provisioner=true in "addons-112776"
	I1127 11:18:04.560551   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:04.560567   80068 host.go:66] Checking if "addons-112776" exists ...
	I1127 11:18:04.560677   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:04.559858   80068 addons.go:69] Setting volumesnapshots=true in profile "addons-112776"
	I1127 11:18:04.560817   80068 addons.go:231] Setting addon volumesnapshots=true in "addons-112776"
	I1127 11:18:04.560886   80068 host.go:66] Checking if "addons-112776" exists ...
	I1127 11:18:04.560959   80068 addons.go:69] Setting gcp-auth=true in profile "addons-112776"
	I1127 11:18:04.560975   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:04.560989   80068 mustload.go:65] Loading cluster: addons-112776
	I1127 11:18:04.561179   80068 config.go:182] Loaded profile config "addons-112776": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 11:18:04.561442   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:04.561478   80068 addons.go:69] Setting ingress-dns=true in profile "addons-112776"
	I1127 11:18:04.561504   80068 addons.go:231] Setting addon ingress-dns=true in "addons-112776"
	I1127 11:18:04.561558   80068 host.go:66] Checking if "addons-112776" exists ...
	I1127 11:18:04.562438   80068 addons.go:69] Setting cloud-spanner=true in profile "addons-112776"
	I1127 11:18:04.562527   80068 addons.go:231] Setting addon cloud-spanner=true in "addons-112776"
	I1127 11:18:04.562605   80068 host.go:66] Checking if "addons-112776" exists ...
	I1127 11:18:04.562860   80068 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-112776"
	I1127 11:18:04.562925   80068 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-112776"
	I1127 11:18:04.562968   80068 host.go:66] Checking if "addons-112776" exists ...
	I1127 11:18:04.561443   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:04.563212   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:04.561457   80068 addons.go:69] Setting helm-tiller=true in profile "addons-112776"
	I1127 11:18:04.563463   80068 addons.go:231] Setting addon helm-tiller=true in "addons-112776"
	I1127 11:18:04.563487   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:04.563548   80068 host.go:66] Checking if "addons-112776" exists ...
	I1127 11:18:04.561468   80068 addons.go:69] Setting ingress=true in profile "addons-112776"
	I1127 11:18:04.567016   80068 addons.go:231] Setting addon ingress=true in "addons-112776"
	I1127 11:18:04.567122   80068 host.go:66] Checking if "addons-112776" exists ...
	I1127 11:18:04.567695   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:04.596239   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:04.597159   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:04.597284   80068 host.go:66] Checking if "addons-112776" exists ...
	I1127 11:18:04.601249   80068 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1127 11:18:04.598670   80068 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-112776"
	I1127 11:18:04.603253   80068 addons.go:231] Setting addon default-storageclass=true in "addons-112776"
	I1127 11:18:04.606654   80068 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1127 11:18:04.605364   80068 host.go:66] Checking if "addons-112776" exists ...
	I1127 11:18:04.605516   80068 host.go:66] Checking if "addons-112776" exists ...
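(The interleaved "Setting addon ..." and "Checking if ... exists" lines above come from one goroutine per addon: each flips its flag in the profile config, then confirms the machine is still up by shelling out to docker container inspect with a Go template for the container state. A minimal sketch of that status probe, assuming only the Docker CLI on PATH; the helper name is illustrative, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerStatus mirrors the probe logged above: it asks the Docker CLI
    // for the container's current state, e.g. "running" or "exited".
    func containerStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        status, err := containerStatus("addons-112776")
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        fmt.Println("addons-112776 is", status)
    }

Each addon runs this check before touching the cluster, which is why the same inspect command repeats a dozen times in under a second.)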
	I1127 11:18:04.607558   80068 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-112776" context rescaled to 1 replicas
	I1127 11:18:04.608186   80068 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1127 11:18:04.611397   80068 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1127 11:18:04.608831   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:04.609894   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1127 11:18:04.612996   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:18:04.613032   80068 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1127 11:18:04.613049   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1127 11:18:04.613099   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:18:04.609909   80068 out.go:177]   - Using image docker.io/registry:2.8.3
	I1127 11:18:04.609946   80068 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 11:18:04.610405   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:04.615983   80068 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1127 11:18:04.615054   80068 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1127 11:18:04.617286   80068 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1127 11:18:04.617324   80068 out.go:177] * Verifying Kubernetes components...
	I1127 11:18:04.617331   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1127 11:18:04.618792   80068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:18:04.617402   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:18:04.617409   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1127 11:18:04.619199   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:18:04.621842   80068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 11:18:04.629898   80068 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1127 11:18:04.631207   80068 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1127 11:18:04.631228   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1127 11:18:04.631292   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:18:04.629852   80068 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 11:18:04.631535   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 11:18:04.631580   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:18:04.634538   80068 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1127 11:18:04.636161   80068 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 11:18:04.637644   80068 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 11:18:04.639085   80068 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1127 11:18:04.639102   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1127 11:18:04.639156   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:18:04.639285   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
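(Before any manifest is copied, the node's SSH port is resolved with the docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calls above, which map container port 22 to host port 32772; every sshutil.go client below then dials 127.0.0.1:32772 as user docker with the machine's id_rsa key. A sketch of that dial using golang.org/x/crypto/ssh, with the key path and port taken from the log lines; this is illustrative, not minikube's actual sshutil code:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and port copied from the sshutil.go log lines above.
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
        }
        // 32772 is the host port Docker published for container port 22/tcp.
        client, err := ssh.Dial("tcp", "127.0.0.1:32772", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("connected:", string(client.ServerVersion()))
    }

The "scp memory --> /etc/kubernetes/addons/..." lines reuse these sessions to stream each manifest from memory onto the node, so no temp files are written on the host.)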
	I1127 11:18:04.648348   80068 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1127 11:18:04.650175   80068 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1127 11:18:04.650201   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1127 11:18:04.650277   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:18:04.659189   80068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1127 11:18:04.671600   80068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1127 11:18:04.674782   80068 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1127 11:18:04.676586   80068 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1127 11:18:04.676610   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1127 11:18:04.676671   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:18:04.679103   80068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1127 11:18:04.684231   80068 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1127 11:18:04.685841   80068 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1127 11:18:04.684041   80068 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 11:18:04.684075   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:18:04.691002   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:18:04.691042   80068 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1127 11:18:04.695856   80068 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1127 11:18:04.695878   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1127 11:18:04.695940   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:18:04.691129   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 11:18:04.697881   80068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1127 11:18:04.696467   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:18:04.698991   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:18:04.700927   80068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1127 11:18:04.699217   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:18:04.701151   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:18:04.705554   80068 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1127 11:18:04.704206   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:18:04.708022   80068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1127 11:18:04.709609   80068 out.go:177]   - Using image docker.io/busybox:stable
	I1127 11:18:04.711012   80068 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1127 11:18:04.711028   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1127 11:18:04.711079   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:18:04.709660   80068 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1127 11:18:04.711268   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1127 11:18:04.711320   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:18:04.717839   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:18:04.723916   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:18:04.725633   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:18:04.726242   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:18:04.731728   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:18:04.733905   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:18:04.766784   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1127 11:18:04.767741   80068 node_ready.go:35] waiting up to 6m0s for node "addons-112776" to be "Ready" ...
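(node_ready.go now polls the Node object until its Ready condition turns True; the recurring node_ready.go:58 lines below are each poll observing "Ready":"False" while the CNI and kubelet come up. A minimal client-go sketch of that readiness check, assuming a standard kubeconfig; the function name is illustrative:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        ready, err := nodeReady(kubernetes.NewForConfigOrDie(cfg), "addons-112776")
        fmt.Println("ready:", ready, "err:", err)
    })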
	I1127 11:18:04.959039   80068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1127 11:18:05.051718   80068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1127 11:18:05.162754   80068 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1127 11:18:05.162808   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1127 11:18:05.240277   80068 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1127 11:18:05.240309   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1127 11:18:05.250073   80068 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1127 11:18:05.250154   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1127 11:18:05.252100   80068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1127 11:18:05.253079   80068 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1127 11:18:05.253107   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1127 11:18:05.258167   80068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1127 11:18:05.342535   80068 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1127 11:18:05.342644   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1127 11:18:05.345372   80068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 11:18:05.347155   80068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1127 11:18:05.348880   80068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 11:18:05.349913   80068 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1127 11:18:05.349969   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1127 11:18:05.358554   80068 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1127 11:18:05.358632   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1127 11:18:05.541351   80068 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1127 11:18:05.541435   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1127 11:18:05.547429   80068 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1127 11:18:05.547511   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1127 11:18:05.550743   80068 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1127 11:18:05.550820   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1127 11:18:05.646485   80068 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1127 11:18:05.646571   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1127 11:18:05.743947   80068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1127 11:18:05.765682   80068 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1127 11:18:05.765755   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1127 11:18:05.862286   80068 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1127 11:18:05.862319   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1127 11:18:05.953896   80068 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 11:18:05.953951   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1127 11:18:06.141223   80068 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1127 11:18:06.141325   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1127 11:18:06.153792   80068 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1127 11:18:06.153903   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1127 11:18:06.153814   80068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1127 11:18:06.160027   80068 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1127 11:18:06.160104   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1127 11:18:06.341979   80068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 11:18:06.353093   80068 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1127 11:18:06.353127   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1127 11:18:06.544720   80068 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1127 11:18:06.544809   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1127 11:18:06.556375   80068 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1127 11:18:06.556408   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1127 11:18:06.747841   80068 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1127 11:18:06.747918   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1127 11:18:06.943811   80068 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 11:18:06.943905   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1127 11:18:07.045809   80068 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1127 11:18:07.045907   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1127 11:18:07.148862   80068 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1127 11:18:07.148944   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1127 11:18:07.161744   80068 node_ready.go:58] node "addons-112776" has status "Ready":"False"
	I1127 11:18:07.248075   80068 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.481244426s)
	I1127 11:18:07.248178   80068 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
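(The 2.48s bash pipeline that just completed rewrites the coredns ConfigMap in place: sed splices a hosts block ahead of the "forward . /etc/resolv.conf" line so host.minikube.internal resolves to the Docker network gateway, then pipes the result back through kubectl replace -f -. Reconstructed from that sed expression, the injected Corefile stanza is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

plus a "log" directive inserted before "errors". The fallthrough keeps every other name flowing on to the usual forward plugin.)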
	I1127 11:18:07.248294   80068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.289218309s)
	I1127 11:18:07.354787   80068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 11:18:07.555840   80068 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1127 11:18:07.555945   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1127 11:18:07.652966   80068 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1127 11:18:07.653060   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1127 11:18:07.940566   80068 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1127 11:18:07.940661   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1127 11:18:07.957983   80068 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1127 11:18:07.958011   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1127 11:18:08.156942   80068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1127 11:18:08.257332   80068 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1127 11:18:08.257428   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1127 11:18:08.742512   80068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1127 11:18:08.958158   80068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.906337415s)
	I1127 11:18:09.641949   80068 node_ready.go:58] node "addons-112776" has status "Ready":"False"
	I1127 11:18:11.348454   80068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.096303067s)
	I1127 11:18:11.348496   80068 addons.go:467] Verifying addon ingress=true in "addons-112776"
	I1127 11:18:11.350510   80068 out.go:177] * Verifying ingress addon...
	I1127 11:18:11.348580   80068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.090338471s)
	I1127 11:18:11.348618   80068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.00315761s)
	I1127 11:18:11.348665   80068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.001441773s)
	I1127 11:18:11.348740   80068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.999801268s)
	I1127 11:18:11.348784   80068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.604737757s)
	I1127 11:18:11.348818   80068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.194786039s)
	I1127 11:18:11.348897   80068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.00688626s)
	I1127 11:18:11.349011   80068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.994124625s)
	I1127 11:18:11.349096   80068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.192043994s)
	I1127 11:18:11.352172   80068 addons.go:467] Verifying addon metrics-server=true in "addons-112776"
	I1127 11:18:11.352176   80068 addons.go:467] Verifying addon registry=true in "addons-112776"
	I1127 11:18:11.354150   80068 out.go:177] * Verifying registry addon...
	W1127 11:18:11.352265   80068 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1127 11:18:11.353144   80068 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1127 11:18:11.355703   80068 retry.go:31] will retry after 254.753311ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
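(This first failure is a CRD registration race, not a manifest bug: the combined kubectl apply submits the VolumeSnapshotClass in the same batch as the CRD that defines it, and the API server has not yet begun serving the new snapshot.storage.k8s.io/v1 kinds when the custom resource arrives, so the REST mapping lookup fails. minikube's retry.go backs off 254ms, and the retry at 11:18:11.611176 below re-runs the apply with --force and succeeds about a second later. A stdlib-only sketch of the same retry shape, with illustrative helper names:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs `kubectl apply` until it succeeds, sleeping a
    // short backoff between attempts -- long enough for a just-created CRD
    // to be established and served by the API server.
    func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = exec.Command("kubectl", "apply", "-f", manifest).Run(); err == nil {
                return nil
            }
            time.Sleep(backoff)
        }
        return fmt.Errorf("apply %s after %d attempts: %w", manifest, attempts, err)
    }

    func main() {
        err := applyWithRetry("/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
            5, 250*time.Millisecond)
        fmt.Println(err)
    })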
	I1127 11:18:11.356628   80068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1127 11:18:11.360332   80068 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
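(The storage-provisioner-rancher warning is a different failure mode: two writers raced to update the "local-path" StorageClass, so the losing PUT carried a stale resourceVersion and the API server rejected it with an optimistic-concurrency conflict. client-go's standard remedy is retry.RetryOnConflict, which re-reads the object before each attempt; a sketch under that assumption, with names illustrative rather than minikube's:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    // markDefault re-fetches the StorageClass inside the retry closure so
    // every attempt updates a fresh resourceVersion.
    func markDefault(cs *kubernetes.Clientset) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            _, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
            return err
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        if err := markDefault(kubernetes.NewForConfigOrDie(cfg)); err != nil {
            panic(err)
        }
    })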
	I1127 11:18:11.361722   80068 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1127 11:18:11.361750   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:11.362268   80068 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1127 11:18:11.362290   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:11.365233   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:11.365852   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:11.404824   80068 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1127 11:18:11.404909   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:18:11.423112   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:18:11.556317   80068 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1127 11:18:11.574231   80068 addons.go:231] Setting addon gcp-auth=true in "addons-112776"
	I1127 11:18:11.574309   80068 host.go:66] Checking if "addons-112776" exists ...
	I1127 11:18:11.574812   80068 cli_runner.go:164] Run: docker container inspect addons-112776 --format={{.State.Status}}
	I1127 11:18:11.592453   80068 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1127 11:18:11.592511   80068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-112776
	I1127 11:18:11.611176   80068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 11:18:11.611476   80068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/addons-112776/id_rsa Username:docker}
	I1127 11:18:11.869852   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:11.870042   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:11.967563   80068 node_ready.go:58] node "addons-112776" has status "Ready":"False"
	I1127 11:18:12.176379   80068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.433731705s)
	I1127 11:18:12.176424   80068 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-112776"
	I1127 11:18:12.179073   80068 out.go:177] * Verifying csi-hostpath-driver addon...
	I1127 11:18:12.181506   80068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1127 11:18:12.245119   80068 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1127 11:18:12.245153   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:12.249312   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:12.370134   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:12.370426   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
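(From here the log is dominated by kapi.go's verification loops: every few hundred milliseconds each loop lists the pods matching its addon's label selector and records the phase, and "Pending: [<nil>]" means a pod exists but has reported no conditions yet. A client-go sketch of one such probe, assuming a standard kubeconfig; purely illustrative:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // One poll of the registry addon's selector, as kapi.go does in a loop.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
            LabelSelector: "kubernetes.io/minikube-addons=registry",
        })
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            // Phase stays Pending until the kubelet pulls images and starts containers.
            fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
        }
    }

The remainder of this section is that loop repeating across the csi-hostpath-driver, gcp-auth, registry, and ingress-nginx selectors until each pod leaves Pending.)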
	I1127 11:18:12.743258   80068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.132022981s)
	I1127 11:18:12.743284   80068 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.150799943s)
	I1127 11:18:12.745231   80068 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1127 11:18:12.747021   80068 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 11:18:12.748597   80068 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1127 11:18:12.748622   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1127 11:18:12.753636   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:12.766239   80068 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1127 11:18:12.766264   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1127 11:18:12.783069   80068 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1127 11:18:12.783093   80068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1127 11:18:12.800336   80068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1127 11:18:12.871420   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:12.871624   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:13.255037   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:13.369948   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:13.370746   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:13.761663   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:13.847991   80068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.047600466s)
	I1127 11:18:13.849685   80068 addons.go:467] Verifying addon gcp-auth=true in "addons-112776"
	I1127 11:18:13.852764   80068 out.go:177] * Verifying gcp-auth addon...
	I1127 11:18:13.855430   80068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1127 11:18:13.859151   80068 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1127 11:18:13.859182   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:13.863489   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:13.944273   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:13.944910   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:13.969194   80068 node_ready.go:58] node "addons-112776" has status "Ready":"False"
	I1127 11:18:14.255024   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:14.368502   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:14.370020   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:14.370038   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:14.754931   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:14.867392   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:14.870201   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:14.870687   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:15.254738   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:15.368326   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:15.369694   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:15.370558   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:15.755125   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:15.867674   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:15.869383   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:15.869936   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:16.255122   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:16.367721   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:16.369992   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:16.370140   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:16.468304   80068 node_ready.go:58] node "addons-112776" has status "Ready":"False"
	I1127 11:18:16.754109   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:16.867362   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:16.869619   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:16.869932   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:17.254815   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:17.367754   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:17.369703   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:17.370070   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:17.754665   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:17.867989   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:17.869407   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:17.870001   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:18.253900   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:18.367690   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:18.369134   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:18.370135   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:18.754498   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:18.867604   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:18.868841   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:18.870099   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:18.967999   80068 node_ready.go:58] node "addons-112776" has status "Ready":"False"
	I1127 11:18:19.254331   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:19.366851   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:19.370065   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:19.370289   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:19.754078   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:19.867129   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:19.869774   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:19.869786   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:20.254129   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:20.367187   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:20.369809   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:20.369935   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:20.754171   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:20.867381   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:20.869959   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:20.869993   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:21.253936   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:21.367349   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:21.369657   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:21.369747   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:21.467428   80068 node_ready.go:58] node "addons-112776" has status "Ready":"False"
	I1127 11:18:21.753408   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:21.867425   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:21.869002   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:21.869734   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:22.253491   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:22.367583   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:22.369075   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:22.369896   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:22.754238   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:22.866954   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:22.869531   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:22.869737   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:23.253904   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:23.366976   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:23.369289   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:23.369485   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:23.468471   80068 node_ready.go:58] node "addons-112776" has status "Ready":"False"
	I1127 11:18:23.754533   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:23.867528   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:23.869032   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:23.871322   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:24.254536   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:24.367323   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:24.368627   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:24.369599   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:24.753038   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:24.866638   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:24.868975   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:24.869214   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:25.253826   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:25.367626   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:25.369084   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:25.369346   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:25.753799   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:25.867789   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:25.869114   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:25.869179   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:25.967981   80068 node_ready.go:58] node "addons-112776" has status "Ready":"False"
	I1127 11:18:26.253889   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:26.367764   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:26.368755   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:26.369772   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:26.753391   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:26.867436   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:26.869770   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:26.870055   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:27.253579   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:27.367619   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:27.369482   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:27.369997   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:27.753658   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:27.867530   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:27.868952   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:27.869768   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:28.253235   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:28.367302   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:28.369647   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:28.369688   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:28.467642   80068 node_ready.go:58] node "addons-112776" has status "Ready":"False"
	I1127 11:18:28.753372   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:28.867205   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:28.868825   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:28.869553   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:29.253992   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:29.366751   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:29.369189   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:29.369222   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:29.754053   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:29.867065   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:29.869239   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:29.869444   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:30.253215   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:30.367206   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:30.369404   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:30.369533   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:30.467852   80068 node_ready.go:58] node "addons-112776" has status "Ready":"False"
	I1127 11:18:30.754341   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:30.867188   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:30.869605   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:30.869905   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:31.253959   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:31.366836   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:31.369035   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:31.369215   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:31.754066   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:31.866953   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:31.869559   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:31.869583   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:32.254304   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:32.367252   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:32.368871   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:32.369588   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:32.753273   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:32.867086   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:32.869473   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:32.869650   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:32.968406   80068 node_ready.go:58] node "addons-112776" has status "Ready":"False"
	I1127 11:18:33.253601   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:33.367725   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:33.369262   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:33.370059   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:33.753776   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:33.867648   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:33.869093   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:33.869307   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:34.254321   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:34.367181   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:34.368891   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:34.369395   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:34.754186   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:34.867139   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:34.869289   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:34.869744   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:35.254200   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:35.366934   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:35.369487   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:35.369623   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:35.467465   80068 node_ready.go:58] node "addons-112776" has status "Ready":"False"
	I1127 11:18:35.753316   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:35.867619   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:35.868865   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:35.869404   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:36.253230   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:36.367128   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:36.369489   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:36.369613   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:36.754099   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:36.866835   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:36.869170   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:36.869596   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:37.254738   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:37.367525   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:37.369327   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:37.370204   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:37.468130   80068 node_ready.go:58] node "addons-112776" has status "Ready":"False"
	I1127 11:18:37.754111   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:37.866747   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:37.869433   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:37.869505   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:38.253460   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:38.367009   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:38.369548   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:38.369775   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:38.753226   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:38.866963   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:38.869318   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:38.869332   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:39.254188   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:39.368746   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:39.373848   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:39.374535   80068 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1127 11:18:39.374561   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:39.468123   80068 node_ready.go:49] node "addons-112776" has status "Ready":"True"
	I1127 11:18:39.468150   80068 node_ready.go:38] duration metric: took 34.700385223s waiting for node "addons-112776" to be "Ready" ...
	I1127 11:18:39.468159   80068 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 11:18:39.480591   80068 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fpndh" in "kube-system" namespace to be "Ready" ...
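(The node_ready/pod_ready messages above come from minikube's internal Go polling helpers. As a rough, hedged reproduction of the node check outside minikube, the following client-go sketch polls the node's Ready condition. The kubeconfig path, poll interval, and program layout are our assumptions; only the node name and the overall wait pattern come from the log.)

    // nodeready.go - illustrative sketch only, not minikube source.
    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a standard kubeconfig; minikube uses its own loader.
        cfg, err := clientcmd.BuildConfigFromFlags("",
            filepath.Join(os.Getenv("HOME"), ".kube", "config"))
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := client.CoreV1().Nodes().Get(
                context.TODO(), "addons-112776", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println(`node "addons-112776" has status "Ready":"True"`)
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second) // the log shows checks every ~2-3s
        }
        fmt.Println("timed out waiting for node to be Ready")
    }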
	I1127 11:18:39.755648   80068 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1127 11:18:39.755687   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:39.868079   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:39.869832   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:39.870633   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:40.255485   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:40.368364   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:40.371037   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:40.371813   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:40.761972   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:40.868670   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:40.870061   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:40.873126   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:41.255104   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:41.367967   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:41.369525   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:41.370995   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:41.547787   80068 pod_ready.go:102] pod "coredns-5dd5756b68-fpndh" in "kube-system" namespace has status "Ready":"False"
	I1127 11:18:41.754706   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:41.867376   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:41.870571   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:41.871907   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:42.254441   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:42.367876   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:42.369440   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:42.369799   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:42.547951   80068 pod_ready.go:92] pod "coredns-5dd5756b68-fpndh" in "kube-system" namespace has status "Ready":"True"
	I1127 11:18:42.547980   80068 pod_ready.go:81] duration metric: took 3.067355523s waiting for pod "coredns-5dd5756b68-fpndh" in "kube-system" namespace to be "Ready" ...
	I1127 11:18:42.548008   80068 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-112776" in "kube-system" namespace to be "Ready" ...
	I1127 11:18:42.553286   80068 pod_ready.go:92] pod "etcd-addons-112776" in "kube-system" namespace has status "Ready":"True"
	I1127 11:18:42.553314   80068 pod_ready.go:81] duration metric: took 5.296741ms waiting for pod "etcd-addons-112776" in "kube-system" namespace to be "Ready" ...
	I1127 11:18:42.553330   80068 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-112776" in "kube-system" namespace to be "Ready" ...
	I1127 11:18:42.557940   80068 pod_ready.go:92] pod "kube-apiserver-addons-112776" in "kube-system" namespace has status "Ready":"True"
	I1127 11:18:42.557962   80068 pod_ready.go:81] duration metric: took 4.623066ms waiting for pod "kube-apiserver-addons-112776" in "kube-system" namespace to be "Ready" ...
	I1127 11:18:42.557976   80068 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-112776" in "kube-system" namespace to be "Ready" ...
	I1127 11:18:42.562978   80068 pod_ready.go:92] pod "kube-controller-manager-addons-112776" in "kube-system" namespace has status "Ready":"True"
	I1127 11:18:42.562998   80068 pod_ready.go:81] duration metric: took 5.013491ms waiting for pod "kube-controller-manager-addons-112776" in "kube-system" namespace to be "Ready" ...
	I1127 11:18:42.563009   80068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g8gm6" in "kube-system" namespace to be "Ready" ...
	I1127 11:18:42.668589   80068 pod_ready.go:92] pod "kube-proxy-g8gm6" in "kube-system" namespace has status "Ready":"True"
	I1127 11:18:42.668618   80068 pod_ready.go:81] duration metric: took 105.600786ms waiting for pod "kube-proxy-g8gm6" in "kube-system" namespace to be "Ready" ...
	I1127 11:18:42.668632   80068 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-112776" in "kube-system" namespace to be "Ready" ...
	I1127 11:18:42.754986   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:42.867601   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:42.869137   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:42.870327   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:43.069183   80068 pod_ready.go:92] pod "kube-scheduler-addons-112776" in "kube-system" namespace has status "Ready":"True"
	I1127 11:18:43.069208   80068 pod_ready.go:81] duration metric: took 400.568352ms waiting for pod "kube-scheduler-addons-112776" in "kube-system" namespace to be "Ready" ...
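(Each pod_ready check above reduces to a single predicate: the pod's PodReady condition must report True. A compilable sketch of that predicate follows; the package and function names are ours, not minikube's. Against this predicate, the durations in the log, 5.3ms for etcd versus 105.6ms for kube-proxy, are simply the time between the first poll and the first True result.)

    package podready

    import corev1 "k8s.io/api/core/v1"

    // isPodReady returns true when the pod's Ready condition is True,
    // which is what the pod_ready.go:92 lines above are reporting.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }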
	I1127 11:18:43.069218   80068 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-fj2dt" in "kube-system" namespace to be "Ready" ...
	I1127 11:18:43.255511   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:43.368198   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:43.368946   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:43.370481   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:43.754203   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:43.867877   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:43.869340   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:43.870434   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:44.254940   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:44.367416   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:44.368909   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:44.370531   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:44.373308   80068 pod_ready.go:92] pod "metrics-server-7c66d45ddc-fj2dt" in "kube-system" namespace has status "Ready":"True"
	I1127 11:18:44.373332   80068 pod_ready.go:81] duration metric: took 1.304107165s waiting for pod "metrics-server-7c66d45ddc-fj2dt" in "kube-system" namespace to be "Ready" ...
	I1127 11:18:44.373345   80068 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-t78st" in "kube-system" namespace to be "Ready" ...
	I1127 11:18:44.754717   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:44.867083   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:44.870074   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:44.870272   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:45.255110   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:45.367919   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:45.369563   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:45.370649   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:45.757654   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:45.867954   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:45.869606   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:45.870257   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:46.254929   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:46.367491   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:46.370124   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:46.370239   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:46.575904   80068 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-t78st" in "kube-system" namespace has status "Ready":"False"
	I1127 11:18:46.754718   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:46.867221   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:46.870496   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:46.870502   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:47.255017   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:47.421214   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:47.422008   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:47.422051   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:47.766609   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:48.003827   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:48.004187   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:48.004263   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:48.254787   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:48.367241   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:48.370173   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:48.370458   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:48.576659   80068 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-t78st" in "kube-system" namespace has status "Ready":"False"
	I1127 11:18:48.755780   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:48.868699   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:48.870387   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:48.871127   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:49.255651   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:49.368531   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:49.369722   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:49.370019   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:49.755115   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:49.868371   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:49.871390   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:49.871509   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:50.255118   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:50.367599   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:50.369404   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:50.370519   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:50.755835   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:50.867546   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:50.869931   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:50.870036   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:51.075203   80068 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-t78st" in "kube-system" namespace has status "Ready":"False"
	I1127 11:18:51.254784   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:51.367738   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:51.374786   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:51.375527   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:51.754067   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:51.867858   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:51.869382   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:51.870505   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:52.254821   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:52.368249   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:52.369845   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:52.370212   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:52.755661   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:52.867068   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:52.943469   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:52.944771   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:53.147567   80068 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-t78st" in "kube-system" namespace has status "Ready":"False"
	I1127 11:18:53.259289   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:53.443608   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:53.444937   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:53.446215   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:53.755391   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:53.868558   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:53.871527   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:53.872672   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:54.255263   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:54.367852   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:54.369066   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:54.370124   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:54.755223   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:54.867715   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:54.870408   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:54.870719   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:55.255066   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:55.367060   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:55.372634   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:55.372729   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:55.575298   80068 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-t78st" in "kube-system" namespace has status "Ready":"False"
	I1127 11:18:55.756005   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:55.867633   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:55.870294   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:55.871081   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:56.255960   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:56.367757   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:56.370525   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:56.371068   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:56.754963   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:56.867583   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:56.869135   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:56.870821   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:57.257295   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:57.366970   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:57.369636   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:57.369936   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:57.754790   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:57.867178   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:57.870203   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:57.870385   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:58.076954   80068 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-t78st" in "kube-system" namespace has status "Ready":"False"
	I1127 11:18:58.255189   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:58.367693   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:58.371553   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:58.372572   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:58.756298   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:58.867695   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:58.870204   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:58.870299   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:59.254985   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:59.367889   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:59.369317   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:59.370549   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:18:59.755369   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:18:59.867877   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:18:59.869606   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:18:59.870500   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:00.254495   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:00.367722   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:00.369203   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:00.373397   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:00.575659   80068 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-t78st" in "kube-system" namespace has status "Ready":"False"
	I1127 11:19:00.754971   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:00.869648   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:00.871572   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:00.871588   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:01.254342   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:01.368215   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:01.369742   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:01.369977   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:01.756582   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:01.868422   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:01.869975   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:01.871231   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:02.256069   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:02.367366   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:02.371217   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:02.372262   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:02.575851   80068 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-t78st" in "kube-system" namespace has status "Ready":"False"
	I1127 11:19:02.755082   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:02.868418   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:02.871657   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:02.872556   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:03.255897   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:03.367929   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:03.371154   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:03.371383   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:03.755509   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:03.868688   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:03.869830   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:03.870694   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:04.255196   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:04.367943   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:04.369824   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:04.370801   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:04.756109   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:04.867692   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:04.869227   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:04.870589   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:05.074887   80068 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-t78st" in "kube-system" namespace has status "Ready":"False"
	I1127 11:19:05.254676   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:05.367084   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:05.370004   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:05.370039   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:05.754882   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:05.867437   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:05.870304   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:05.870414   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:06.255289   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:06.442685   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:06.445690   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:06.449677   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:06.843465   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:06.867209   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:06.942795   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:06.950437   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:07.075965   80068 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-t78st" in "kube-system" namespace has status "Ready":"False"
	I1127 11:19:07.255113   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:07.368985   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:07.369621   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:07.370696   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:07.757367   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:07.867168   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:07.870245   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:07.871257   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:08.256348   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:08.372220   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:08.373755   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:08.373942   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:08.756244   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:08.870059   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:08.871295   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:08.872011   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:09.255566   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:09.368308   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:09.369686   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:09.374215   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:09.575943   80068 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-t78st" in "kube-system" namespace has status "Ready":"False"
	I1127 11:19:09.755879   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:09.867030   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:09.869635   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:09.869902   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:10.255082   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:10.367656   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:10.369191   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:10.370201   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:10.754978   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:10.867615   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:10.869116   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:10.870166   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:11.255137   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:11.367688   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:11.369201   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:11.369967   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:11.574802   80068 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-t78st" in "kube-system" namespace has status "Ready":"True"
	I1127 11:19:11.574828   80068 pod_ready.go:81] duration metric: took 27.20147373s waiting for pod "nvidia-device-plugin-daemonset-t78st" in "kube-system" namespace to be "Ready" ...
	I1127 11:19:11.574852   80068 pod_ready.go:38] duration metric: took 32.106681417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
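(The repeated kapi.go:96 lines above are one poll loop per addon label selector, each ticking until every selected pod leaves Pending. A single iteration of such a loop looks roughly like the sketch below; the selector string is taken from the log, while the namespace and function name are our assumptions. The "Found 2 Pods for label selector" line corresponds to len(pods.Items).)

    package kapisketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listAddonPods is an illustrative single iteration of the kapi.go:96
    // loop: list pods by label selector and report each pod's phase.
    func listAddonPods(client kubernetes.Interface) error {
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            fmt.Printf("%s: %s\n", p.Name, p.Status.Phase) // e.g. "Pending"
        }
        return nil
    }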
	I1127 11:19:11.574874   80068 api_server.go:52] waiting for apiserver process to appear ...
	I1127 11:19:11.574912   80068 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1127 11:19:11.574973   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1127 11:19:11.613825   80068 cri.go:89] found id: "627ad779e391c7d89ec0ca5220bf81d5c622af5e1ff359c03b3a04d0bd5714ea"
	I1127 11:19:11.613850   80068 cri.go:89] found id: ""
	I1127 11:19:11.613860   80068 logs.go:284] 1 containers: [627ad779e391c7d89ec0ca5220bf81d5c622af5e1ff359c03b3a04d0bd5714ea]
	I1127 11:19:11.613904   80068 ssh_runner.go:195] Run: which crictl
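(The cri.go/ssh_runner pair above maps a component name to its container ID by running crictl inside the node, and the same lookup repeats below for etcd, coredns, and the rest. From a shell inside the node (minikube ssh), the equivalent lookup, wrapped in Go's os/exec purely for illustration, is sketched here; the error handling and file name are our assumptions, the command itself is copied from the log.)

    // crictlps.go - run the lookup the log shows minikube issuing over SSH:
    // `sudo crictl ps -a --quiet --name=kube-apiserver`.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name=kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        // --quiet prints one container ID per line, matching the
        // `found id: "..."` lines in the log.
        for _, id := range strings.Fields(strings.TrimSpace(string(out))) {
            fmt.Println("found id:", id)
        }
    }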
	I1127 11:19:11.617250   80068 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1127 11:19:11.617321   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1127 11:19:11.651597   80068 cri.go:89] found id: "c54055175a4dd5970fc7598af59f32611f49b993ffc9b07eaecc9aedf9656f16"
	I1127 11:19:11.651624   80068 cri.go:89] found id: ""
	I1127 11:19:11.651633   80068 logs.go:284] 1 containers: [c54055175a4dd5970fc7598af59f32611f49b993ffc9b07eaecc9aedf9656f16]
	I1127 11:19:11.651698   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:11.654954   80068 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1127 11:19:11.655020   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1127 11:19:11.687962   80068 cri.go:89] found id: "c2fc1def31a3da49962cdc2f04b73bfdf033debc566550bbba74accbc8a50c9f"
	I1127 11:19:11.687983   80068 cri.go:89] found id: ""
	I1127 11:19:11.687991   80068 logs.go:284] 1 containers: [c2fc1def31a3da49962cdc2f04b73bfdf033debc566550bbba74accbc8a50c9f]
	I1127 11:19:11.688049   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:11.691235   80068 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1127 11:19:11.691294   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1127 11:19:11.723172   80068 cri.go:89] found id: "343a75521d39d91b3a9ce8c800b87c9e150bfb0554fc842336a8392fed78cd7c"
	I1127 11:19:11.723201   80068 cri.go:89] found id: ""
	I1127 11:19:11.723211   80068 logs.go:284] 1 containers: [343a75521d39d91b3a9ce8c800b87c9e150bfb0554fc842336a8392fed78cd7c]
	I1127 11:19:11.723263   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:11.726463   80068 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1127 11:19:11.726522   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1127 11:19:11.755845   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:11.759420   80068 cri.go:89] found id: "a0ade0548e790cee097b33a4fc0a1067c7ccad61f7795ef8886da0fa7f16591c"
	I1127 11:19:11.759439   80068 cri.go:89] found id: ""
	I1127 11:19:11.759448   80068 logs.go:284] 1 containers: [a0ade0548e790cee097b33a4fc0a1067c7ccad61f7795ef8886da0fa7f16591c]
	I1127 11:19:11.759502   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:11.762703   80068 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1127 11:19:11.762761   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1127 11:19:11.795812   80068 cri.go:89] found id: "5140d71d6ffabf4fc694a4f9d0c835b7600995c74e4873dfb1821ec2a07082db"
	I1127 11:19:11.795834   80068 cri.go:89] found id: ""
	I1127 11:19:11.795841   80068 logs.go:284] 1 containers: [5140d71d6ffabf4fc694a4f9d0c835b7600995c74e4873dfb1821ec2a07082db]
	I1127 11:19:11.795883   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:11.799068   80068 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1127 11:19:11.799119   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1127 11:19:11.830803   80068 cri.go:89] found id: "37d0216f66567597c43ba55e20787f79e7cfc587940a0e058fcfe481830edc32"
	I1127 11:19:11.830829   80068 cri.go:89] found id: ""
	I1127 11:19:11.830836   80068 logs.go:284] 1 containers: [37d0216f66567597c43ba55e20787f79e7cfc587940a0e058fcfe481830edc32]
	I1127 11:19:11.830884   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:11.834168   80068 logs.go:123] Gathering logs for etcd [c54055175a4dd5970fc7598af59f32611f49b993ffc9b07eaecc9aedf9656f16] ...
	I1127 11:19:11.834224   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c54055175a4dd5970fc7598af59f32611f49b993ffc9b07eaecc9aedf9656f16"
	I1127 11:19:11.869212   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:11.870317   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:11.871022   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:11.881186   80068 logs.go:123] Gathering logs for coredns [c2fc1def31a3da49962cdc2f04b73bfdf033debc566550bbba74accbc8a50c9f] ...
	I1127 11:19:11.881213   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2fc1def31a3da49962cdc2f04b73bfdf033debc566550bbba74accbc8a50c9f"
	I1127 11:19:11.912715   80068 logs.go:123] Gathering logs for kube-proxy [a0ade0548e790cee097b33a4fc0a1067c7ccad61f7795ef8886da0fa7f16591c] ...
	I1127 11:19:11.912747   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0ade0548e790cee097b33a4fc0a1067c7ccad61f7795ef8886da0fa7f16591c"
	I1127 11:19:11.944748   80068 logs.go:123] Gathering logs for kube-controller-manager [5140d71d6ffabf4fc694a4f9d0c835b7600995c74e4873dfb1821ec2a07082db] ...
	I1127 11:19:11.944775   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5140d71d6ffabf4fc694a4f9d0c835b7600995c74e4873dfb1821ec2a07082db"
	I1127 11:19:11.998469   80068 logs.go:123] Gathering logs for kubelet ...
	I1127 11:19:11.998506   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1127 11:19:12.038755   80068 logs.go:138] Found kubelet problem: Nov 27 11:18:04 addons-112776 kubelet[1551]: W1127 11:18:04.757741    1551 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-112776" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-112776' and this object
	W1127 11:19:12.038924   80068 logs.go:138] Found kubelet problem: Nov 27 11:18:04 addons-112776 kubelet[1551]: E1127 11:18:04.757795    1551 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-112776" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-112776' and this object
	I1127 11:19:12.070176   80068 logs.go:123] Gathering logs for dmesg ...
	I1127 11:19:12.070214   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1127 11:19:12.084958   80068 logs.go:123] Gathering logs for describe nodes ...
	I1127 11:19:12.084992   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1127 11:19:12.185488   80068 logs.go:123] Gathering logs for kube-apiserver [627ad779e391c7d89ec0ca5220bf81d5c622af5e1ff359c03b3a04d0bd5714ea] ...
	I1127 11:19:12.185521   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627ad779e391c7d89ec0ca5220bf81d5c622af5e1ff359c03b3a04d0bd5714ea"
	I1127 11:19:12.229876   80068 logs.go:123] Gathering logs for kindnet [37d0216f66567597c43ba55e20787f79e7cfc587940a0e058fcfe481830edc32] ...
	I1127 11:19:12.229906   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37d0216f66567597c43ba55e20787f79e7cfc587940a0e058fcfe481830edc32"
	I1127 11:19:12.255038   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:12.262329   80068 logs.go:123] Gathering logs for container status ...
	I1127 11:19:12.262357   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1127 11:19:12.302563   80068 logs.go:123] Gathering logs for kube-scheduler [343a75521d39d91b3a9ce8c800b87c9e150bfb0554fc842336a8392fed78cd7c] ...
	I1127 11:19:12.302609   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 343a75521d39d91b3a9ce8c800b87c9e150bfb0554fc842336a8392fed78cd7c"
	I1127 11:19:12.345420   80068 logs.go:123] Gathering logs for CRI-O ...
	I1127 11:19:12.345453   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1127 11:19:12.367175   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:12.369933   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:12.370029   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:12.419131   80068 out.go:309] Setting ErrFile to fd 2...
	I1127 11:19:12.419162   80068 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1127 11:19:12.419224   80068 out.go:239] X Problems detected in kubelet:
	W1127 11:19:12.419235   80068 out.go:239]   Nov 27 11:18:04 addons-112776 kubelet[1551]: W1127 11:18:04.757741    1551 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-112776" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-112776' and this object
	W1127 11:19:12.419242   80068 out.go:239]   Nov 27 11:18:04 addons-112776 kubelet[1551]: E1127 11:18:04.757795    1551 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-112776" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-112776' and this object
	I1127 11:19:12.419254   80068 out.go:309] Setting ErrFile to fd 2...
	I1127 11:19:12.419259   80068 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:19:12.754869   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:12.867842   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:12.869611   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:12.871007   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:13.256021   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:13.367924   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:13.370327   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:13.372065   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:13.754886   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:13.867579   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:13.869270   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:13.870109   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:14.255330   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:14.367472   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:14.369937   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:14.370081   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:14.755633   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:14.867879   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:14.869913   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:14.870188   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:15.255497   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:15.366789   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:15.369371   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:15.370060   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:15.754995   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:15.868389   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:15.869478   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:15.870807   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:16.254566   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:16.367269   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:16.369849   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:16.370259   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 11:19:16.754923   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:16.867611   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:16.869366   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:16.870379   80068 kapi.go:107] duration metric: took 1m5.513747428s to wait for kubernetes.io/minikube-addons=registry ...
	I1127 11:19:17.255247   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:17.367836   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:17.369073   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:17.754519   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:17.866773   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:17.869606   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:18.255750   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:18.367112   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:18.370191   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:18.754898   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:18.867263   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:18.869760   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:19.256236   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:19.444719   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 11:19:19.449600   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:19.763786   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:19.868405   80068 kapi.go:107] duration metric: took 1m6.012973071s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1127 11:19:19.870540   80068 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-112776 cluster.
	I1127 11:19:19.872036   80068 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1127 11:19:19.941396   80068 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1127 11:19:19.942957   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:20.256111   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:20.371103   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:20.754976   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:20.870029   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:21.255400   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:21.370043   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:21.754930   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:21.871324   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:22.255070   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:22.369467   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:22.420761   80068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:19:22.434841   80068 api_server.go:72] duration metric: took 1m17.820177366s to wait for apiserver process to appear ...
	I1127 11:19:22.434872   80068 api_server.go:88] waiting for apiserver healthz status ...
	I1127 11:19:22.434916   80068 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1127 11:19:22.434977   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1127 11:19:22.471708   80068 cri.go:89] found id: "627ad779e391c7d89ec0ca5220bf81d5c622af5e1ff359c03b3a04d0bd5714ea"
	I1127 11:19:22.471735   80068 cri.go:89] found id: ""
	I1127 11:19:22.471745   80068 logs.go:284] 1 containers: [627ad779e391c7d89ec0ca5220bf81d5c622af5e1ff359c03b3a04d0bd5714ea]
	I1127 11:19:22.471791   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:22.475695   80068 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1127 11:19:22.475754   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1127 11:19:22.542297   80068 cri.go:89] found id: "c54055175a4dd5970fc7598af59f32611f49b993ffc9b07eaecc9aedf9656f16"
	I1127 11:19:22.542322   80068 cri.go:89] found id: ""
	I1127 11:19:22.542332   80068 logs.go:284] 1 containers: [c54055175a4dd5970fc7598af59f32611f49b993ffc9b07eaecc9aedf9656f16]
	I1127 11:19:22.542393   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:22.545996   80068 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1127 11:19:22.546053   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1127 11:19:22.581891   80068 cri.go:89] found id: "c2fc1def31a3da49962cdc2f04b73bfdf033debc566550bbba74accbc8a50c9f"
	I1127 11:19:22.581920   80068 cri.go:89] found id: ""
	I1127 11:19:22.581931   80068 logs.go:284] 1 containers: [c2fc1def31a3da49962cdc2f04b73bfdf033debc566550bbba74accbc8a50c9f]
	I1127 11:19:22.581980   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:22.585855   80068 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1127 11:19:22.585922   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1127 11:19:22.651164   80068 cri.go:89] found id: "343a75521d39d91b3a9ce8c800b87c9e150bfb0554fc842336a8392fed78cd7c"
	I1127 11:19:22.651186   80068 cri.go:89] found id: ""
	I1127 11:19:22.651194   80068 logs.go:284] 1 containers: [343a75521d39d91b3a9ce8c800b87c9e150bfb0554fc842336a8392fed78cd7c]
	I1127 11:19:22.651235   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:22.654494   80068 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1127 11:19:22.654565   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1127 11:19:22.691635   80068 cri.go:89] found id: "a0ade0548e790cee097b33a4fc0a1067c7ccad61f7795ef8886da0fa7f16591c"
	I1127 11:19:22.691661   80068 cri.go:89] found id: ""
	I1127 11:19:22.691690   80068 logs.go:284] 1 containers: [a0ade0548e790cee097b33a4fc0a1067c7ccad61f7795ef8886da0fa7f16591c]
	I1127 11:19:22.691746   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:22.695411   80068 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1127 11:19:22.695486   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1127 11:19:22.757260   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:22.767160   80068 cri.go:89] found id: "5140d71d6ffabf4fc694a4f9d0c835b7600995c74e4873dfb1821ec2a07082db"
	I1127 11:19:22.767189   80068 cri.go:89] found id: ""
	I1127 11:19:22.767199   80068 logs.go:284] 1 containers: [5140d71d6ffabf4fc694a4f9d0c835b7600995c74e4873dfb1821ec2a07082db]
	I1127 11:19:22.767252   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:22.770557   80068 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1127 11:19:22.770616   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1127 11:19:22.844358   80068 cri.go:89] found id: "37d0216f66567597c43ba55e20787f79e7cfc587940a0e058fcfe481830edc32"
	I1127 11:19:22.844387   80068 cri.go:89] found id: ""
	I1127 11:19:22.844397   80068 logs.go:284] 1 containers: [37d0216f66567597c43ba55e20787f79e7cfc587940a0e058fcfe481830edc32]
	I1127 11:19:22.844451   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:22.848472   80068 logs.go:123] Gathering logs for dmesg ...
	I1127 11:19:22.848494   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1127 11:19:22.866374   80068 logs.go:123] Gathering logs for kube-apiserver [627ad779e391c7d89ec0ca5220bf81d5c622af5e1ff359c03b3a04d0bd5714ea] ...
	I1127 11:19:22.866405   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627ad779e391c7d89ec0ca5220bf81d5c622af5e1ff359c03b3a04d0bd5714ea"
	I1127 11:19:22.870669   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:22.957638   80068 logs.go:123] Gathering logs for CRI-O ...
	I1127 11:19:22.957681   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1127 11:19:23.050990   80068 logs.go:123] Gathering logs for container status ...
	I1127 11:19:23.051049   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1127 11:19:23.096657   80068 logs.go:123] Gathering logs for kubelet ...
	I1127 11:19:23.096706   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1127 11:19:23.179782   80068 logs.go:138] Found kubelet problem: Nov 27 11:18:04 addons-112776 kubelet[1551]: W1127 11:18:04.757741    1551 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-112776" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-112776' and this object
	W1127 11:19:23.179972   80068 logs.go:138] Found kubelet problem: Nov 27 11:18:04 addons-112776 kubelet[1551]: E1127 11:18:04.757795    1551 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-112776" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-112776' and this object
	I1127 11:19:23.217450   80068 logs.go:123] Gathering logs for etcd [c54055175a4dd5970fc7598af59f32611f49b993ffc9b07eaecc9aedf9656f16] ...
	I1127 11:19:23.217496   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c54055175a4dd5970fc7598af59f32611f49b993ffc9b07eaecc9aedf9656f16"
	I1127 11:19:23.254976   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:23.271082   80068 logs.go:123] Gathering logs for coredns [c2fc1def31a3da49962cdc2f04b73bfdf033debc566550bbba74accbc8a50c9f] ...
	I1127 11:19:23.271118   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2fc1def31a3da49962cdc2f04b73bfdf033debc566550bbba74accbc8a50c9f"
	I1127 11:19:23.308405   80068 logs.go:123] Gathering logs for kube-scheduler [343a75521d39d91b3a9ce8c800b87c9e150bfb0554fc842336a8392fed78cd7c] ...
	I1127 11:19:23.308441   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 343a75521d39d91b3a9ce8c800b87c9e150bfb0554fc842336a8392fed78cd7c"
	I1127 11:19:23.370013   80068 logs.go:123] Gathering logs for kube-proxy [a0ade0548e790cee097b33a4fc0a1067c7ccad61f7795ef8886da0fa7f16591c] ...
	I1127 11:19:23.370045   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0ade0548e790cee097b33a4fc0a1067c7ccad61f7795ef8886da0fa7f16591c"
	I1127 11:19:23.370793   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:23.441686   80068 logs.go:123] Gathering logs for kube-controller-manager [5140d71d6ffabf4fc694a4f9d0c835b7600995c74e4873dfb1821ec2a07082db] ...
	I1127 11:19:23.441732   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5140d71d6ffabf4fc694a4f9d0c835b7600995c74e4873dfb1821ec2a07082db"
	I1127 11:19:23.509174   80068 logs.go:123] Gathering logs for kindnet [37d0216f66567597c43ba55e20787f79e7cfc587940a0e058fcfe481830edc32] ...
	I1127 11:19:23.509210   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37d0216f66567597c43ba55e20787f79e7cfc587940a0e058fcfe481830edc32"
	I1127 11:19:23.581930   80068 logs.go:123] Gathering logs for describe nodes ...
	I1127 11:19:23.581982   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1127 11:19:23.755788   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:23.776917   80068 out.go:309] Setting ErrFile to fd 2...
	I1127 11:19:23.776946   80068 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1127 11:19:23.777000   80068 out.go:239] X Problems detected in kubelet:
	W1127 11:19:23.777013   80068 out.go:239]   Nov 27 11:18:04 addons-112776 kubelet[1551]: W1127 11:18:04.757741    1551 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-112776" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-112776' and this object
	W1127 11:19:23.777026   80068 out.go:239]   Nov 27 11:18:04 addons-112776 kubelet[1551]: E1127 11:18:04.757795    1551 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-112776" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-112776' and this object
	I1127 11:19:23.777036   80068 out.go:309] Setting ErrFile to fd 2...
	I1127 11:19:23.777043   80068 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:19:23.869552   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:24.255423   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:24.369982   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:24.807518   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:24.875117   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:25.255895   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:25.370082   80068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 11:19:25.755096   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:25.870010   80068 kapi.go:107] duration metric: took 1m14.516866051s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1127 11:19:26.255357   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:26.755153   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:27.260150   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:27.755861   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:28.255629   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:28.755318   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:29.254413   80068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 11:19:29.754911   80068 kapi.go:107] duration metric: took 1m17.573406517s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1127 11:19:29.756889   80068 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, inspektor-gadget, storage-provisioner, ingress-dns, metrics-server, helm-tiller, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1127 11:19:29.758694   80068 addons.go:502] enable addons completed in 1m25.198968532s: enabled=[nvidia-device-plugin cloud-spanner inspektor-gadget storage-provisioner ingress-dns metrics-server helm-tiller default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1127 11:19:33.779353   80068 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1127 11:19:33.784882   80068 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1127 11:19:33.785975   80068 api_server.go:141] control plane version: v1.28.4
	I1127 11:19:33.786000   80068 api_server.go:131] duration metric: took 11.351121422s to wait for apiserver health ...
	I1127 11:19:33.786008   80068 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 11:19:33.786029   80068 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1127 11:19:33.786071   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1127 11:19:33.821426   80068 cri.go:89] found id: "627ad779e391c7d89ec0ca5220bf81d5c622af5e1ff359c03b3a04d0bd5714ea"
	I1127 11:19:33.821449   80068 cri.go:89] found id: ""
	I1127 11:19:33.821458   80068 logs.go:284] 1 containers: [627ad779e391c7d89ec0ca5220bf81d5c622af5e1ff359c03b3a04d0bd5714ea]
	I1127 11:19:33.821506   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:33.825060   80068 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1127 11:19:33.825118   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1127 11:19:33.859314   80068 cri.go:89] found id: "c54055175a4dd5970fc7598af59f32611f49b993ffc9b07eaecc9aedf9656f16"
	I1127 11:19:33.859341   80068 cri.go:89] found id: ""
	I1127 11:19:33.859352   80068 logs.go:284] 1 containers: [c54055175a4dd5970fc7598af59f32611f49b993ffc9b07eaecc9aedf9656f16]
	I1127 11:19:33.859406   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:33.862767   80068 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1127 11:19:33.862842   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1127 11:19:33.896050   80068 cri.go:89] found id: "c2fc1def31a3da49962cdc2f04b73bfdf033debc566550bbba74accbc8a50c9f"
	I1127 11:19:33.896074   80068 cri.go:89] found id: ""
	I1127 11:19:33.896082   80068 logs.go:284] 1 containers: [c2fc1def31a3da49962cdc2f04b73bfdf033debc566550bbba74accbc8a50c9f]
	I1127 11:19:33.896125   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:33.899576   80068 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1127 11:19:33.899631   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1127 11:19:33.932036   80068 cri.go:89] found id: "343a75521d39d91b3a9ce8c800b87c9e150bfb0554fc842336a8392fed78cd7c"
	I1127 11:19:33.932062   80068 cri.go:89] found id: ""
	I1127 11:19:33.932069   80068 logs.go:284] 1 containers: [343a75521d39d91b3a9ce8c800b87c9e150bfb0554fc842336a8392fed78cd7c]
	I1127 11:19:33.932116   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:33.935451   80068 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1127 11:19:33.935504   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1127 11:19:33.967819   80068 cri.go:89] found id: "a0ade0548e790cee097b33a4fc0a1067c7ccad61f7795ef8886da0fa7f16591c"
	I1127 11:19:33.967840   80068 cri.go:89] found id: ""
	I1127 11:19:33.967848   80068 logs.go:284] 1 containers: [a0ade0548e790cee097b33a4fc0a1067c7ccad61f7795ef8886da0fa7f16591c]
	I1127 11:19:33.967898   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:33.971216   80068 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1127 11:19:33.971264   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1127 11:19:34.008170   80068 cri.go:89] found id: "5140d71d6ffabf4fc694a4f9d0c835b7600995c74e4873dfb1821ec2a07082db"
	I1127 11:19:34.008198   80068 cri.go:89] found id: ""
	I1127 11:19:34.008207   80068 logs.go:284] 1 containers: [5140d71d6ffabf4fc694a4f9d0c835b7600995c74e4873dfb1821ec2a07082db]
	I1127 11:19:34.008258   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:34.012331   80068 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1127 11:19:34.012400   80068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1127 11:19:34.059466   80068 cri.go:89] found id: "37d0216f66567597c43ba55e20787f79e7cfc587940a0e058fcfe481830edc32"
	I1127 11:19:34.059493   80068 cri.go:89] found id: ""
	I1127 11:19:34.059503   80068 logs.go:284] 1 containers: [37d0216f66567597c43ba55e20787f79e7cfc587940a0e058fcfe481830edc32]
	I1127 11:19:34.059559   80068 ssh_runner.go:195] Run: which crictl
	I1127 11:19:34.062881   80068 logs.go:123] Gathering logs for kubelet ...
	I1127 11:19:34.062905   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1127 11:19:34.109147   80068 logs.go:138] Found kubelet problem: Nov 27 11:18:04 addons-112776 kubelet[1551]: W1127 11:18:04.757741    1551 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-112776" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-112776' and this object
	W1127 11:19:34.109321   80068 logs.go:138] Found kubelet problem: Nov 27 11:18:04 addons-112776 kubelet[1551]: E1127 11:18:04.757795    1551 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-112776" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-112776' and this object
	I1127 11:19:34.142228   80068 logs.go:123] Gathering logs for kube-apiserver [627ad779e391c7d89ec0ca5220bf81d5c622af5e1ff359c03b3a04d0bd5714ea] ...
	I1127 11:19:34.142267   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627ad779e391c7d89ec0ca5220bf81d5c622af5e1ff359c03b3a04d0bd5714ea"
	I1127 11:19:34.185562   80068 logs.go:123] Gathering logs for etcd [c54055175a4dd5970fc7598af59f32611f49b993ffc9b07eaecc9aedf9656f16] ...
	I1127 11:19:34.185599   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c54055175a4dd5970fc7598af59f32611f49b993ffc9b07eaecc9aedf9656f16"
	I1127 11:19:34.228309   80068 logs.go:123] Gathering logs for coredns [c2fc1def31a3da49962cdc2f04b73bfdf033debc566550bbba74accbc8a50c9f] ...
	I1127 11:19:34.228343   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2fc1def31a3da49962cdc2f04b73bfdf033debc566550bbba74accbc8a50c9f"
	I1127 11:19:34.262763   80068 logs.go:123] Gathering logs for kube-scheduler [343a75521d39d91b3a9ce8c800b87c9e150bfb0554fc842336a8392fed78cd7c] ...
	I1127 11:19:34.262793   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 343a75521d39d91b3a9ce8c800b87c9e150bfb0554fc842336a8392fed78cd7c"
	I1127 11:19:34.300573   80068 logs.go:123] Gathering logs for CRI-O ...
	I1127 11:19:34.300601   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1127 11:19:34.374362   80068 logs.go:123] Gathering logs for dmesg ...
	I1127 11:19:34.374403   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1127 11:19:34.388753   80068 logs.go:123] Gathering logs for describe nodes ...
	I1127 11:19:34.388784   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1127 11:19:34.486264   80068 logs.go:123] Gathering logs for kube-proxy [a0ade0548e790cee097b33a4fc0a1067c7ccad61f7795ef8886da0fa7f16591c] ...
	I1127 11:19:34.486298   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0ade0548e790cee097b33a4fc0a1067c7ccad61f7795ef8886da0fa7f16591c"
	I1127 11:19:34.519225   80068 logs.go:123] Gathering logs for kube-controller-manager [5140d71d6ffabf4fc694a4f9d0c835b7600995c74e4873dfb1821ec2a07082db] ...
	I1127 11:19:34.519256   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5140d71d6ffabf4fc694a4f9d0c835b7600995c74e4873dfb1821ec2a07082db"
	I1127 11:19:34.573763   80068 logs.go:123] Gathering logs for kindnet [37d0216f66567597c43ba55e20787f79e7cfc587940a0e058fcfe481830edc32] ...
	I1127 11:19:34.573800   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37d0216f66567597c43ba55e20787f79e7cfc587940a0e058fcfe481830edc32"
	I1127 11:19:34.606598   80068 logs.go:123] Gathering logs for container status ...
	I1127 11:19:34.606631   80068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1127 11:19:34.652248   80068 out.go:309] Setting ErrFile to fd 2...
	I1127 11:19:34.652278   80068 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1127 11:19:34.652341   80068 out.go:239] X Problems detected in kubelet:
	W1127 11:19:34.652355   80068 out.go:239]   Nov 27 11:18:04 addons-112776 kubelet[1551]: W1127 11:18:04.757741    1551 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-112776" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-112776' and this object
	W1127 11:19:34.652363   80068 out.go:239]   Nov 27 11:18:04 addons-112776 kubelet[1551]: E1127 11:18:04.757795    1551 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-112776" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-112776' and this object
	I1127 11:19:34.652378   80068 out.go:309] Setting ErrFile to fd 2...
	I1127 11:19:34.652390   80068 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:19:44.662729   80068 system_pods.go:59] 19 kube-system pods found
	I1127 11:19:44.662772   80068 system_pods.go:61] "coredns-5dd5756b68-fpndh" [8f5e2765-fda4-41d0-bdad-02357eb272c8] Running
	I1127 11:19:44.662779   80068 system_pods.go:61] "csi-hostpath-attacher-0" [cda04a1e-f882-4d73-95a0-78127b27610d] Running
	I1127 11:19:44.662783   80068 system_pods.go:61] "csi-hostpath-resizer-0" [9c4e9799-8873-4c3e-81d9-cf63fbdb6cbd] Running
	I1127 11:19:44.662787   80068 system_pods.go:61] "csi-hostpathplugin-rx4gv" [527339e2-0a6f-4756-bf77-d2b46d768ade] Running
	I1127 11:19:44.662791   80068 system_pods.go:61] "etcd-addons-112776" [ca93dda1-3aef-4af4-8316-e85fbb0b97fa] Running
	I1127 11:19:44.662795   80068 system_pods.go:61] "kindnet-fkm7v" [fccad576-29db-4d30-b66f-e942a2bf3c9a] Running
	I1127 11:19:44.662799   80068 system_pods.go:61] "kube-apiserver-addons-112776" [0ccf9176-4f06-401d-be8c-52306ed81be1] Running
	I1127 11:19:44.662804   80068 system_pods.go:61] "kube-controller-manager-addons-112776" [87558c8e-6ad4-4572-8b44-a2958ba10fff] Running
	I1127 11:19:44.662808   80068 system_pods.go:61] "kube-ingress-dns-minikube" [dd9e6c5e-83af-4aeb-9939-0255324ec091] Running
	I1127 11:19:44.662813   80068 system_pods.go:61] "kube-proxy-g8gm6" [c802c1d1-fd5f-419c-92f9-1339ecbfe712] Running
	I1127 11:19:44.662816   80068 system_pods.go:61] "kube-scheduler-addons-112776" [e26233fd-fd35-4b53-b08c-5677a73e0777] Running
	I1127 11:19:44.662825   80068 system_pods.go:61] "metrics-server-7c66d45ddc-fj2dt" [89070c3d-2674-4c90-8ea8-0fd92dd023df] Running
	I1127 11:19:44.662829   80068 system_pods.go:61] "nvidia-device-plugin-daemonset-t78st" [fbcd2671-0323-4c2a-81c3-f3d3726e355b] Running
	I1127 11:19:44.662836   80068 system_pods.go:61] "registry-lmltk" [2af7cf8b-6b3b-4728-be19-f6cb5e9d7195] Running
	I1127 11:19:44.662840   80068 system_pods.go:61] "registry-proxy-gcphz" [7d830023-3b79-4d5f-b0ed-5cd31be11e05] Running
	I1127 11:19:44.662846   80068 system_pods.go:61] "snapshot-controller-58dbcc7b99-bnxxq" [42ecd7c9-eda1-4c73-8e92-2f763ae3272b] Running
	I1127 11:19:44.662850   80068 system_pods.go:61] "snapshot-controller-58dbcc7b99-lvsh5" [be5d1f77-c675-4b73-afda-ee8c0587985a] Running
	I1127 11:19:44.662858   80068 system_pods.go:61] "storage-provisioner" [2c4cde40-e3db-45ef-b3f1-ea131a62e301] Running
	I1127 11:19:44.662863   80068 system_pods.go:61] "tiller-deploy-7b677967b9-jlvnj" [02d82240-a512-4369-8110-df7c8846c5b5] Running
	I1127 11:19:44.662876   80068 system_pods.go:74] duration metric: took 10.876860232s to wait for pod list to return data ...
	I1127 11:19:44.662888   80068 default_sa.go:34] waiting for default service account to be created ...
	I1127 11:19:44.665080   80068 default_sa.go:45] found service account: "default"
	I1127 11:19:44.665101   80068 default_sa.go:55] duration metric: took 2.202701ms for default service account to be created ...
	I1127 11:19:44.665108   80068 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 11:19:44.672892   80068 system_pods.go:86] 19 kube-system pods found
	I1127 11:19:44.672918   80068 system_pods.go:89] "coredns-5dd5756b68-fpndh" [8f5e2765-fda4-41d0-bdad-02357eb272c8] Running
	I1127 11:19:44.672924   80068 system_pods.go:89] "csi-hostpath-attacher-0" [cda04a1e-f882-4d73-95a0-78127b27610d] Running
	I1127 11:19:44.672928   80068 system_pods.go:89] "csi-hostpath-resizer-0" [9c4e9799-8873-4c3e-81d9-cf63fbdb6cbd] Running
	I1127 11:19:44.672932   80068 system_pods.go:89] "csi-hostpathplugin-rx4gv" [527339e2-0a6f-4756-bf77-d2b46d768ade] Running
	I1127 11:19:44.672936   80068 system_pods.go:89] "etcd-addons-112776" [ca93dda1-3aef-4af4-8316-e85fbb0b97fa] Running
	I1127 11:19:44.672940   80068 system_pods.go:89] "kindnet-fkm7v" [fccad576-29db-4d30-b66f-e942a2bf3c9a] Running
	I1127 11:19:44.672945   80068 system_pods.go:89] "kube-apiserver-addons-112776" [0ccf9176-4f06-401d-be8c-52306ed81be1] Running
	I1127 11:19:44.672951   80068 system_pods.go:89] "kube-controller-manager-addons-112776" [87558c8e-6ad4-4572-8b44-a2958ba10fff] Running
	I1127 11:19:44.672958   80068 system_pods.go:89] "kube-ingress-dns-minikube" [dd9e6c5e-83af-4aeb-9939-0255324ec091] Running
	I1127 11:19:44.672964   80068 system_pods.go:89] "kube-proxy-g8gm6" [c802c1d1-fd5f-419c-92f9-1339ecbfe712] Running
	I1127 11:19:44.672975   80068 system_pods.go:89] "kube-scheduler-addons-112776" [e26233fd-fd35-4b53-b08c-5677a73e0777] Running
	I1127 11:19:44.672984   80068 system_pods.go:89] "metrics-server-7c66d45ddc-fj2dt" [89070c3d-2674-4c90-8ea8-0fd92dd023df] Running
	I1127 11:19:44.672994   80068 system_pods.go:89] "nvidia-device-plugin-daemonset-t78st" [fbcd2671-0323-4c2a-81c3-f3d3726e355b] Running
	I1127 11:19:44.672998   80068 system_pods.go:89] "registry-lmltk" [2af7cf8b-6b3b-4728-be19-f6cb5e9d7195] Running
	I1127 11:19:44.673001   80068 system_pods.go:89] "registry-proxy-gcphz" [7d830023-3b79-4d5f-b0ed-5cd31be11e05] Running
	I1127 11:19:44.673005   80068 system_pods.go:89] "snapshot-controller-58dbcc7b99-bnxxq" [42ecd7c9-eda1-4c73-8e92-2f763ae3272b] Running
	I1127 11:19:44.673012   80068 system_pods.go:89] "snapshot-controller-58dbcc7b99-lvsh5" [be5d1f77-c675-4b73-afda-ee8c0587985a] Running
	I1127 11:19:44.673020   80068 system_pods.go:89] "storage-provisioner" [2c4cde40-e3db-45ef-b3f1-ea131a62e301] Running
	I1127 11:19:44.673026   80068 system_pods.go:89] "tiller-deploy-7b677967b9-jlvnj" [02d82240-a512-4369-8110-df7c8846c5b5] Running
	I1127 11:19:44.673034   80068 system_pods.go:126] duration metric: took 7.919969ms to wait for k8s-apps to be running ...
	I1127 11:19:44.673042   80068 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 11:19:44.673099   80068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:19:44.684228   80068 system_svc.go:56] duration metric: took 11.171532ms WaitForService to wait for kubelet.
	I1127 11:19:44.684260   80068 kubeadm.go:581] duration metric: took 1m40.069601343s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 11:19:44.684289   80068 node_conditions.go:102] verifying NodePressure condition ...
	I1127 11:19:44.687270   80068 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1127 11:19:44.687293   80068 node_conditions.go:123] node cpu capacity is 8
	I1127 11:19:44.687304   80068 node_conditions.go:105] duration metric: took 3.009088ms to run NodePressure ...
	I1127 11:19:44.687316   80068 start.go:228] waiting for startup goroutines ...
	I1127 11:19:44.687322   80068 start.go:233] waiting for cluster config update ...
	I1127 11:19:44.687336   80068 start.go:242] writing updated cluster config ...
	I1127 11:19:44.687634   80068 ssh_runner.go:195] Run: rm -f paused
	I1127 11:19:44.739427   80068 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1127 11:19:44.741939   80068 out.go:177] * Done! kubectl is now configured to use "addons-112776" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 27 11:22:28 addons-112776 crio[944]: time="2023-11-27 11:22:28.181276796Z" level=info msg="Removing container: f0edd138c8d068ace70ef127cfa3dbeb6f750cc1817d4192b3916811d6fc680a" id=40934eb4-cb6a-43f4-86af-34add5960526 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 27 11:22:28 addons-112776 crio[944]: time="2023-11-27 11:22:28.198487203Z" level=info msg="Removed container f0edd138c8d068ace70ef127cfa3dbeb6f750cc1817d4192b3916811d6fc680a: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=40934eb4-cb6a-43f4-86af-34add5960526 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 27 11:22:28 addons-112776 crio[944]: time="2023-11-27 11:22:28.404854409Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7" id=12e422c9-e486-4326-a693-0eecd7842cef name=/runtime.v1.ImageService/PullImage
	Nov 27 11:22:28 addons-112776 crio[944]: time="2023-11-27 11:22:28.405676882Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=d54d097c-7870-4c5d-b17b-b8757a9c3215 name=/runtime.v1.ImageService/ImageStatus
	Nov 27 11:22:28 addons-112776 crio[944]: time="2023-11-27 11:22:28.406673988Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=d54d097c-7870-4c5d-b17b-b8757a9c3215 name=/runtime.v1.ImageService/ImageStatus
	Nov 27 11:22:28 addons-112776 crio[944]: time="2023-11-27 11:22:28.407506419Z" level=info msg="Creating container: default/hello-world-app-5d77478584-l7k96/hello-world-app" id=d900630a-bcff-4494-b457-f62b987b4dcd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 27 11:22:28 addons-112776 crio[944]: time="2023-11-27 11:22:28.407608154Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 27 11:22:28 addons-112776 crio[944]: time="2023-11-27 11:22:28.485616743Z" level=info msg="Created container 3c53beedf74e99e73e08855c6058d3e97c2805cad60e6a88785ecd924b9706c5: default/hello-world-app-5d77478584-l7k96/hello-world-app" id=d900630a-bcff-4494-b457-f62b987b4dcd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 27 11:22:28 addons-112776 crio[944]: time="2023-11-27 11:22:28.486274895Z" level=info msg="Starting container: 3c53beedf74e99e73e08855c6058d3e97c2805cad60e6a88785ecd924b9706c5" id=6472dfa4-abf7-4589-8aa7-d8643daf21c1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 27 11:22:28 addons-112776 crio[944]: time="2023-11-27 11:22:28.494611809Z" level=info msg="Started container" PID=10989 containerID=3c53beedf74e99e73e08855c6058d3e97c2805cad60e6a88785ecd924b9706c5 description=default/hello-world-app-5d77478584-l7k96/hello-world-app id=6472dfa4-abf7-4589-8aa7-d8643daf21c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=629befbd1c8281854c7b3d48a2e776dc7af0feb0363b5cea60df05d821f17b70
	Nov 27 11:22:29 addons-112776 crio[944]: time="2023-11-27 11:22:29.751776783Z" level=info msg="Stopping container: 828d33134683c6c129b6522c6028c3d0b603f0e863d0f7c91bac1492539e2810 (timeout: 2s)" id=e08ae210-5cd1-4a95-91c1-bc80b03d43ac name=/runtime.v1.RuntimeService/StopContainer
	Nov 27 11:22:31 addons-112776 crio[944]: time="2023-11-27 11:22:31.760952421Z" level=warning msg="Stopping container 828d33134683c6c129b6522c6028c3d0b603f0e863d0f7c91bac1492539e2810 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=e08ae210-5cd1-4a95-91c1-bc80b03d43ac name=/runtime.v1.RuntimeService/StopContainer
	Nov 27 11:22:31 addons-112776 conmon[6390]: conmon 828d33134683c6c129b6 <ninfo>: container 6402 exited with status 137
	Nov 27 11:22:31 addons-112776 crio[944]: time="2023-11-27 11:22:31.905629095Z" level=info msg="Stopped container 828d33134683c6c129b6522c6028c3d0b603f0e863d0f7c91bac1492539e2810: ingress-nginx/ingress-nginx-controller-7c6974c4d8-bz794/controller" id=e08ae210-5cd1-4a95-91c1-bc80b03d43ac name=/runtime.v1.RuntimeService/StopContainer
	Nov 27 11:22:31 addons-112776 crio[944]: time="2023-11-27 11:22:31.906181423Z" level=info msg="Stopping pod sandbox: 9dd88d6ac0411fed6195272df1da20146e0914f056a89f44a627124a4f413978" id=47736fd2-5708-4e2b-84c2-40ede66e328c name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 27 11:22:31 addons-112776 crio[944]: time="2023-11-27 11:22:31.909331302Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-RJHMLEQ7LIRR3ZVA - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-TZOBZAUM4RZGYE6L - [0:0]\n-X KUBE-HP-RJHMLEQ7LIRR3ZVA\n-X KUBE-HP-TZOBZAUM4RZGYE6L\nCOMMIT\n"
	Nov 27 11:22:31 addons-112776 crio[944]: time="2023-11-27 11:22:31.910718929Z" level=info msg="Closing host port tcp:80"
	Nov 27 11:22:31 addons-112776 crio[944]: time="2023-11-27 11:22:31.910762046Z" level=info msg="Closing host port tcp:443"
	Nov 27 11:22:31 addons-112776 crio[944]: time="2023-11-27 11:22:31.912226838Z" level=info msg="Host port tcp:80 does not have an open socket"
	Nov 27 11:22:31 addons-112776 crio[944]: time="2023-11-27 11:22:31.912245691Z" level=info msg="Host port tcp:443 does not have an open socket"
	Nov 27 11:22:31 addons-112776 crio[944]: time="2023-11-27 11:22:31.912377210Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7c6974c4d8-bz794 Namespace:ingress-nginx ID:9dd88d6ac0411fed6195272df1da20146e0914f056a89f44a627124a4f413978 UID:b76802e2-0511-42c6-8551-0ba110a12957 NetNS:/var/run/netns/f599ab14-ea1c-49ca-a6c1-75b821b1cdde Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 27 11:22:31 addons-112776 crio[944]: time="2023-11-27 11:22:31.912495209Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7c6974c4d8-bz794 from CNI network \"kindnet\" (type=ptp)"
	Nov 27 11:22:31 addons-112776 crio[944]: time="2023-11-27 11:22:31.945409304Z" level=info msg="Stopped pod sandbox: 9dd88d6ac0411fed6195272df1da20146e0914f056a89f44a627124a4f413978" id=47736fd2-5708-4e2b-84c2-40ede66e328c name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 27 11:22:32 addons-112776 crio[944]: time="2023-11-27 11:22:32.193692818Z" level=info msg="Removing container: 828d33134683c6c129b6522c6028c3d0b603f0e863d0f7c91bac1492539e2810" id=e227f21b-3e94-489e-9c5f-4ab1fd184b3a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 27 11:22:32 addons-112776 crio[944]: time="2023-11-27 11:22:32.210648719Z" level=info msg="Removed container 828d33134683c6c129b6522c6028c3d0b603f0e863d0f7c91bac1492539e2810: ingress-nginx/ingress-nginx-controller-7c6974c4d8-bz794/controller" id=e227f21b-3e94-489e-9c5f-4ab1fd184b3a name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3c53beedf74e9       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   629befbd1c828       hello-world-app-5d77478584-l7k96
	4d13d95213b7a       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                              2 minutes ago       Running             nginx                     0                   77e0bc206234e       nginx
	702ee18cddbd4       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                        2 minutes ago       Running             headlamp                  0                   02573a3e37d69       headlamp-777fd4b855-qjdms
	03be926a9d4ea       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago       Exited              patch                     3                   c223ead7467bf       ingress-nginx-admission-patch-t76z9
	15f11a20c193b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   f5aabb36687f4       gcp-auth-d4c87556c-stpzr
	7464aedb29500       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   7fc5d4c84212e       ingress-nginx-admission-create-nvkxh
	d9b90d0956d67       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   8d8dac895b4f2       local-path-provisioner-78b46b4d5c-k878v
	c2fc1def31a3d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   1f14aea3e43a2       coredns-5dd5756b68-fpndh
	82e96b4791036       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   f416ffae13188       storage-provisioner
	37d0216f66567       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   b74bdc8c235df       kindnet-fkm7v
	a0ade0548e790       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   52dee9dc3efbe       kube-proxy-g8gm6
	343a75521d39d       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   0a16c829f8d16       kube-scheduler-addons-112776
	5140d71d6ffab       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   3378463ecd33e       kube-controller-manager-addons-112776
	627ad779e391c       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   9c0227cec8878       kube-apiserver-addons-112776
	c54055175a4dd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   bd910888a91ec       etcd-addons-112776
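	(The STATE column above is CRI-O's view rather than Kubernetes': the two Exited entries are the completed one-shot admission create/patch jobs, not failures. A sketch for reproducing this table directly on the node:
	    out/minikube-linux-amd64 -p addons-112776 ssh -- sudo crictl ps -a
	)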
	
	* 
	* ==> coredns [c2fc1def31a3da49962cdc2f04b73bfdf033debc566550bbba74accbc8a50c9f] <==
	* [INFO] 10.244.0.17:37056 - 41089 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000137575s
	[INFO] 10.244.0.17:34513 - 38397 "AAAA IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.006225027s
	[INFO] 10.244.0.17:34513 - 44039 "A IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.007144733s
	[INFO] 10.244.0.17:34850 - 340 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004230166s
	[INFO] 10.244.0.17:34850 - 8281 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005037484s
	[INFO] 10.244.0.17:41449 - 63603 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004929209s
	[INFO] 10.244.0.17:41449 - 63852 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006707527s
	[INFO] 10.244.0.17:56737 - 57620 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000044434s
	[INFO] 10.244.0.17:56737 - 16357 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077474s
	[INFO] 10.244.0.20:57729 - 20549 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000232545s
	[INFO] 10.244.0.20:35108 - 10490 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000300096s
	[INFO] 10.244.0.20:52068 - 44085 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124862s
	[INFO] 10.244.0.20:33764 - 11665 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130105s
	[INFO] 10.244.0.20:45356 - 24913 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119567s
	[INFO] 10.244.0.20:33286 - 636 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000167335s
	[INFO] 10.244.0.20:43516 - 18448 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.004767534s
	[INFO] 10.244.0.20:42623 - 15423 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007116376s
	[INFO] 10.244.0.20:50255 - 61690 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004158603s
	[INFO] 10.244.0.20:46343 - 43634 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005547526s
	[INFO] 10.244.0.20:59119 - 44568 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00464689s
	[INFO] 10.244.0.20:49628 - 57074 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004806556s
	[INFO] 10.244.0.20:58989 - 25844 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000625936s
	[INFO] 10.244.0.20:54142 - 14400 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.000722065s
	[INFO] 10.244.0.23:52095 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000227556s
	[INFO] 10.244.0.23:39002 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000128468s
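	(Most of the NXDOMAIN lines above are ordinary search-path fan-out rather than failures: with the default `ndots:5`, a lookup such as `storage.googleapis.com` is first tried against each resolv.conf suffix (the namespace and cluster domains, then the GCE zone/project domains and `google.internal`) before the bare name finally returns NOERROR. A quick way to watch this from inside the cluster, assuming the busybox image/tag is pullable:
	    kubectl --context addons-112776 run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup storage.googleapis.com.
	The trailing dot marks the name fully qualified, which skips search-list expansion entirely.)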
	
	* 
	* ==> describe nodes <==
	* Name:               addons-112776
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-112776
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=81390b5609e7feb2151fde4633273d04eb05a21f
	                    minikube.k8s.io/name=addons-112776
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T11_17_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-112776
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 11:17:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-112776
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 11:22:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 11:20:25 +0000   Mon, 27 Nov 2023 11:17:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 11:20:25 +0000   Mon, 27 Nov 2023 11:17:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 11:20:25 +0000   Mon, 27 Nov 2023 11:17:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 11:20:25 +0000   Mon, 27 Nov 2023 11:18:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-112776
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 c38edbfc319049c693ef597fb8712efc
	  System UUID:                e7117f71-442e-4a53-8e3d-b5111205c6dd
	  Boot ID:                    70e275d9-e289-4a40-9f12-718983944527
	  Kernel Version:             5.15.0-1046-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-l7k96           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gcp-auth                    gcp-auth-d4c87556c-stpzr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  headlamp                    headlamp-777fd4b855-qjdms                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 coredns-5dd5756b68-fpndh                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m32s
	  kube-system                 etcd-addons-112776                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m44s
	  kube-system                 kindnet-fkm7v                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m32s
	  kube-system                 kube-apiserver-addons-112776               250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-controller-manager-addons-112776      200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-proxy-g8gm6                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-scheduler-addons-112776               100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  local-path-storage          local-path-provisioner-78b46b4d5c-k878v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m27s                  kube-proxy       
	  Normal  Starting                 4m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m51s (x8 over 4m51s)  kubelet          Node addons-112776 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m51s (x8 over 4m51s)  kubelet          Node addons-112776 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m51s (x8 over 4m51s)  kubelet          Node addons-112776 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m45s                  kubelet          Node addons-112776 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s                  kubelet          Node addons-112776 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s                  kubelet          Node addons-112776 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m33s                  node-controller  Node addons-112776 event: Registered Node addons-112776 in Controller
	  Normal  NodeReady                3m57s                  kubelet          Node addons-112776 status is now: NodeReady
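	(The block above is standard `kubectl describe node` output as captured by the test helpers; on a live cluster the same snapshot comes from:
	    kubectl --context addons-112776 describe node addons-112776
	)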
	
	* 
	* ==> dmesg <==
	* [  +0.000726] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000844] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000763] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000645] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000641] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000727] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000755] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.052000] kauditd_printk_skb: 36 callbacks suppressed
	[Nov27 10:51] kauditd_printk_skb: 3 callbacks suppressed
	[Nov27 10:55] kauditd_printk_skb: 2 callbacks suppressed
	[Nov27 11:20] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 0a 03 73 6b 1d 75 d2 55 17 af 16 df 08 00
	[  +1.019393] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 03 73 6b 1d 75 d2 55 17 af 16 df 08 00
	[  +2.011796] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0a 03 73 6b 1d 75 d2 55 17 af 16 df 08 00
	[  +4.195560] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 0a 03 73 6b 1d 75 d2 55 17 af 16 df 08 00
	[  +8.187218] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 03 73 6b 1d 75 d2 55 17 af 16 df 08 00
	[ +16.126297] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0a 03 73 6b 1d 75 d2 55 17 af 16 df 08 00
	[Nov27 11:21] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0a 03 73 6b 1d 75 d2 55 17 af 16 df 08 00
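	(The repeated `martian source ... from 127.0.0.1` entries line up with kube-proxy setting `route_localnet=1` (see the kube-proxy section below), which lets loopback-sourced traffic reach node ports; the kernel then logs those packets whenever martian logging is enabled. A sketch for checking both sysctls on the node:
	    out/minikube-linux-amd64 -p addons-112776 ssh -- sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.all.log_martians
	)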
	
	* 
	* ==> etcd [c54055175a4dd5970fc7598af59f32611f49b993ffc9b07eaecc9aedf9656f16] <==
	* {"level":"info","ts":"2023-11-27T11:18:08.562915Z","caller":"traceutil/trace.go:171","msg":"trace[726613229] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"216.142505ms","start":"2023-11-27T11:18:08.346763Z","end":"2023-11-27T11:18:08.562906Z","steps":["trace[726613229] 'process raft request'  (duration: 215.198949ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T11:18:08.562982Z","caller":"traceutil/trace.go:171","msg":"trace[52598142] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"215.718025ms","start":"2023-11-27T11:18:08.347256Z","end":"2023-11-27T11:18:08.562974Z","steps":["trace[52598142] 'process raft request'  (duration: 215.508503ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T11:18:08.563112Z","caller":"traceutil/trace.go:171","msg":"trace[1071271272] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"216.149634ms","start":"2023-11-27T11:18:08.346955Z","end":"2023-11-27T11:18:08.563104Z","steps":["trace[1071271272] 'process raft request'  (duration: 215.039504ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T11:18:09.565097Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.207501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-27T11:18:09.565154Z","caller":"traceutil/trace.go:171","msg":"trace[295274347] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:0; response_revision:491; }","duration":"102.274149ms","start":"2023-11-27T11:18:09.462868Z","end":"2023-11-27T11:18:09.565142Z","steps":["trace[295274347] 'agreement among raft nodes before linearized reading'  (duration: 85.35576ms)","trace[295274347] 'range keys from in-memory index tree'  (duration: 16.832818ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-27T11:18:09.565928Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.871399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-27T11:18:09.565954Z","caller":"traceutil/trace.go:171","msg":"trace[785289299] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:493; }","duration":"102.902044ms","start":"2023-11-27T11:18:09.463044Z","end":"2023-11-27T11:18:09.565946Z","steps":["trace[785289299] 'agreement among raft nodes before linearized reading'  (duration: 102.861949ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T11:18:09.656156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.561235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/local-path-provisioner-role\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-27T11:18:09.65627Z","caller":"traceutil/trace.go:171","msg":"trace[59815459] range","detail":"{range_begin:/registry/clusterroles/local-path-provisioner-role; range_end:; response_count:0; response_revision:498; }","duration":"103.684527ms","start":"2023-11-27T11:18:09.552573Z","end":"2023-11-27T11:18:09.656258Z","steps":["trace[59815459] 'agreement among raft nodes before linearized reading'  (duration: 103.53586ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T11:18:48.00165Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.35647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14599"}
	{"level":"warn","ts":"2023-11-27T11:18:48.001726Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.336118ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11310"}
	{"level":"info","ts":"2023-11-27T11:18:48.001767Z","caller":"traceutil/trace.go:171","msg":"trace[1992624592] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:954; }","duration":"134.49451ms","start":"2023-11-27T11:18:47.867258Z","end":"2023-11-27T11:18:48.001752Z","steps":["trace[1992624592] 'range keys from in-memory index tree'  (duration: 134.229978ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T11:18:48.001777Z","caller":"traceutil/trace.go:171","msg":"trace[1802634114] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:954; }","duration":"136.397978ms","start":"2023-11-27T11:18:47.865368Z","end":"2023-11-27T11:18:48.001766Z","steps":["trace[1802634114] 'range keys from in-memory index tree'  (duration: 136.228807ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T11:18:48.001793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.497086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:90291"}
	{"level":"info","ts":"2023-11-27T11:18:48.001824Z","caller":"traceutil/trace.go:171","msg":"trace[1242241399] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:954; }","duration":"134.532966ms","start":"2023-11-27T11:18:47.867285Z","end":"2023-11-27T11:18:48.001818Z","steps":["trace[1242241399] 'range keys from in-memory index tree'  (duration: 134.286359ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T11:19:19.246808Z","caller":"traceutil/trace.go:171","msg":"trace[1288207857] transaction","detail":"{read_only:false; response_revision:1118; number_of_response:1; }","duration":"101.97957ms","start":"2023-11-27T11:19:19.144805Z","end":"2023-11-27T11:19:19.246785Z","steps":["trace[1288207857] 'process raft request'  (duration: 101.824539ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T11:19:41.738772Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.680683ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128025429990037727 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/ingress-nginx/ingress-nginx-controller-admission\" mod_revision:1142 > success:<request_put:<key:\"/registry/services/endpoints/ingress-nginx/ingress-nginx-controller-admission\" value_size:818 >> failure:<request_range:<key:\"/registry/services/endpoints/ingress-nginx/ingress-nginx-controller-admission\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-11-27T11:19:41.739043Z","caller":"traceutil/trace.go:171","msg":"trace[725233832] transaction","detail":"{read_only:false; response_revision:1212; number_of_response:1; }","duration":"185.817272ms","start":"2023-11-27T11:19:41.553214Z","end":"2023-11-27T11:19:41.739031Z","steps":["trace[725233832] 'process raft request'  (duration: 185.740803ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T11:19:41.739037Z","caller":"traceutil/trace.go:171","msg":"trace[1830275728] transaction","detail":"{read_only:false; response_revision:1211; number_of_response:1; }","duration":"186.187254ms","start":"2023-11-27T11:19:41.55283Z","end":"2023-11-27T11:19:41.739017Z","steps":["trace[1830275728] 'process raft request'  (duration: 186.043651ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T11:19:41.739216Z","caller":"traceutil/trace.go:171","msg":"trace[917211720] transaction","detail":"{read_only:false; response_revision:1213; number_of_response:1; }","duration":"185.615695ms","start":"2023-11-27T11:19:41.553592Z","end":"2023-11-27T11:19:41.739208Z","steps":["trace[917211720] 'process raft request'  (duration: 185.405764ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T11:19:41.739097Z","caller":"traceutil/trace.go:171","msg":"trace[929505766] transaction","detail":"{read_only:false; response_revision:1210; number_of_response:1; }","duration":"186.25491ms","start":"2023-11-27T11:19:41.552834Z","end":"2023-11-27T11:19:41.739089Z","steps":["trace[929505766] 'process raft request'  (duration: 57.167181ms)","trace[929505766] 'compare'  (duration: 128.581334ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-27T11:19:41.739333Z","caller":"traceutil/trace.go:171","msg":"trace[1346849949] linearizableReadLoop","detail":"{readStateIndex:1255; appliedIndex:1250; }","duration":"149.399307ms","start":"2023-11-27T11:19:41.589903Z","end":"2023-11-27T11:19:41.739303Z","steps":["trace[1346849949] 'read index received'  (duration: 20.209013ms)","trace[1346849949] 'applied index is now lower than readState.Index'  (duration: 129.189536ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-27T11:19:41.73936Z","caller":"traceutil/trace.go:171","msg":"trace[356771250] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"184.903479ms","start":"2023-11-27T11:19:41.554445Z","end":"2023-11-27T11:19:41.739349Z","steps":["trace[356771250] 'process raft request'  (duration: 184.701401ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T11:19:41.739439Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.545153ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2023-11-27T11:19:41.739474Z","caller":"traceutil/trace.go:171","msg":"trace[1079211161] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1214; }","duration":"149.581487ms","start":"2023-11-27T11:19:41.589877Z","end":"2023-11-27T11:19:41.739458Z","steps":["trace[1079211161] 'agreement among raft nodes before linearized reading'  (duration: 149.492729ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [15f11a20c193bb81d1139eab8db827d28b517271f2494ee3b002bda3ac853afd] <==
	* 2023/11/27 11:19:18 GCP Auth Webhook started!
	2023/11/27 11:19:45 Ready to marshal response ...
	2023/11/27 11:19:45 Ready to write response ...
	2023/11/27 11:19:45 Ready to marshal response ...
	2023/11/27 11:19:45 Ready to write response ...
	2023/11/27 11:19:54 Ready to marshal response ...
	2023/11/27 11:19:54 Ready to write response ...
	2023/11/27 11:19:55 Ready to marshal response ...
	2023/11/27 11:19:55 Ready to write response ...
	2023/11/27 11:19:57 Ready to marshal response ...
	2023/11/27 11:19:57 Ready to write response ...
	2023/11/27 11:20:01 Ready to marshal response ...
	2023/11/27 11:20:01 Ready to write response ...
	2023/11/27 11:20:01 Ready to marshal response ...
	2023/11/27 11:20:01 Ready to write response ...
	2023/11/27 11:20:01 Ready to marshal response ...
	2023/11/27 11:20:01 Ready to write response ...
	2023/11/27 11:20:01 Ready to marshal response ...
	2023/11/27 11:20:01 Ready to write response ...
	2023/11/27 11:20:13 Ready to marshal response ...
	2023/11/27 11:20:13 Ready to write response ...
	2023/11/27 11:20:35 Ready to marshal response ...
	2023/11/27 11:20:35 Ready to write response ...
	2023/11/27 11:22:26 Ready to marshal response ...
	2023/11/27 11:22:26 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  11:22:37 up  2:05,  0 users,  load average: 0.42, 1.14, 2.00
	Linux addons-112776 5.15.0-1046-gcp #54~20.04.1-Ubuntu SMP Wed Oct 25 08:22:15 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [37d0216f66567597c43ba55e20787f79e7cfc587940a0e058fcfe481830edc32] <==
	* I1127 11:20:29.212811       1 main.go:227] handling current node
	I1127 11:20:39.217206       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:20:39.217231       1 main.go:227] handling current node
	I1127 11:20:49.227006       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:20:49.227033       1 main.go:227] handling current node
	I1127 11:20:59.231202       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:20:59.231230       1 main.go:227] handling current node
	I1127 11:21:09.235342       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:21:09.235366       1 main.go:227] handling current node
	I1127 11:21:19.239303       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:21:19.239329       1 main.go:227] handling current node
	I1127 11:21:29.249798       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:21:29.249817       1 main.go:227] handling current node
	I1127 11:21:39.253924       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:21:39.253947       1 main.go:227] handling current node
	I1127 11:21:49.262858       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:21:49.262879       1 main.go:227] handling current node
	I1127 11:21:59.268440       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:21:59.268462       1 main.go:227] handling current node
	I1127 11:22:09.271852       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:22:09.271876       1 main.go:227] handling current node
	I1127 11:22:19.275741       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:22:19.275763       1 main.go:227] handling current node
	I1127 11:22:29.285751       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:22:29.285772       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [627ad779e391c7d89ec0ca5220bf81d5c622af5e1ff359c03b3a04d0bd5714ea] <==
	* I1127 11:20:01.689474       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1127 11:20:01.974287       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.232.248"}
	I1127 11:20:25.557625       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1127 11:20:45.333364       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1127 11:20:50.309232       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 11:20:50.309280       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 11:20:50.316865       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 11:20:50.316942       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 11:20:50.324988       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 11:20:50.325130       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 11:20:50.325584       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 11:20:50.325791       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 11:20:50.346828       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 11:20:50.347027       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 11:20:50.440801       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 11:20:50.442056       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E1127 11:20:50.448520       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	I1127 11:20:50.449847       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 11:20:50.449903       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 11:20:50.472720       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 11:20:50.472751       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1127 11:20:51.326682       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1127 11:20:51.472881       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1127 11:20:51.511712       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1127 11:22:27.110284       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.125.140"}
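	(The burst of `Adding GroupVersion snapshot.storage.k8s.io ... to ResourceManager` lines followed by `Terminating all watchers` reflects the volume-snapshot CRDs being removed as addons were disabled. Whether anything in that group is still served can be checked with:
	    kubectl --context addons-112776 get crd | grep snapshot.storage.k8s.io
	)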
	
	* 
	* ==> kube-controller-manager [5140d71d6ffabf4fc694a4f9d0c835b7600995c74e4873dfb1821ec2a07082db] <==
	* W1127 11:21:28.351185       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 11:21:28.351215       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 11:21:34.196253       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 11:21:34.196287       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 11:21:51.010697       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 11:21:51.010727       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 11:22:07.085729       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 11:22:07.085769       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 11:22:11.893387       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 11:22:11.893416       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 11:22:12.111914       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 11:22:12.111951       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1127 11:22:26.956808       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1127 11:22:26.971135       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-l7k96"
	I1127 11:22:26.976691       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.087874ms"
	I1127 11:22:26.984683       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="7.945985ms"
	I1127 11:22:26.984826       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="34.874µs"
	I1127 11:22:26.985473       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="48.694µs"
	W1127 11:22:27.262547       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 11:22:27.262579       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1127 11:22:28.733788       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1127 11:22:28.740775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="7.937µs"
	I1127 11:22:28.743144       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1127 11:22:29.199550       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.935691ms"
	I1127 11:22:29.199648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="48.867µs"
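	(The recurring `failed to list *v1.PartialObjectMetadata` errors are the controller-manager's metadata informer still retrying the resource types deleted above (the snapshot group); they are noisy but should stop once the informer drops the stale resources. Listing what the API still advertises for that group:
	    kubectl --context addons-112776 api-resources --api-group=snapshot.storage.k8s.io
	)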
	
	* 
	* ==> kube-proxy [a0ade0548e790cee097b33a4fc0a1067c7ccad61f7795ef8886da0fa7f16591c] <==
	* I1127 11:18:08.948357       1 server_others.go:69] "Using iptables proxy"
	I1127 11:18:09.065751       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1127 11:18:09.561527       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1127 11:18:09.653339       1 server_others.go:152] "Using iptables Proxier"
	I1127 11:18:09.653460       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1127 11:18:09.653495       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1127 11:18:09.653535       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1127 11:18:09.653803       1 server.go:846] "Version info" version="v1.28.4"
	I1127 11:18:09.653824       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1127 11:18:09.654448       1 config.go:315] "Starting node config controller"
	I1127 11:18:09.654469       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1127 11:18:09.654829       1 config.go:188] "Starting service config controller"
	I1127 11:18:09.655015       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1127 11:18:09.654985       1 config.go:97] "Starting endpoint slice config controller"
	I1127 11:18:09.655067       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1127 11:18:09.758958       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1127 11:18:09.764837       1 shared_informer.go:318] Caches are synced for service config
	I1127 11:18:09.765572       1 shared_informer.go:318] Caches are synced for node config
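	(kube-proxy came up in iptables mode with `route_localnet=1`, which is what makes localhost node-port access, and the dmesg martian entries earlier, possible. Its generated service chains can be inspected on the node, for example:
	    out/minikube-linux-amd64 -p addons-112776 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n
	)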
	
	* 
	* ==> kube-scheduler [343a75521d39d91b3a9ce8c800b87c9e150bfb0554fc842336a8392fed78cd7c] <==
	* E1127 11:17:49.055701       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 11:17:49.055654       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1127 11:17:49.055721       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1127 11:17:49.055739       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1127 11:17:49.055403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1127 11:17:49.055766       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1127 11:17:49.055477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 11:17:49.055789       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1127 11:17:49.055516       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1127 11:17:49.055806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1127 11:17:49.055593       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1127 11:17:49.055821       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1127 11:17:49.055615       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 11:17:49.055841       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1127 11:17:49.055622       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1127 11:17:49.055858       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1127 11:17:49.055347       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 11:17:49.055877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1127 11:17:50.057160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1127 11:17:50.057206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1127 11:17:50.081095       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1127 11:17:50.081133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1127 11:17:50.277407       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1127 11:17:50.277453       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1127 11:17:52.248798       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
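	(The `forbidden` list/watch errors at 11:17:49-50 fall in the normal bootstrap window before the apiserver publishes the scheduler's RBAC bindings; the final `Caches are synced` line shows they resolved on their own. The binding itself can be confirmed with:
	    kubectl --context addons-112776 get clusterrolebinding system:kube-scheduler
	)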
	
	* 
	* ==> kubelet <==
	* Nov 27 11:22:27 addons-112776 kubelet[1551]: I1127 11:22:27.143765    1551 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/845a90e0-8ed1-410b-9e78-7f60585668a7-gcp-creds\") pod \"hello-world-app-5d77478584-l7k96\" (UID: \"845a90e0-8ed1-410b-9e78-7f60585668a7\") " pod="default/hello-world-app-5d77478584-l7k96"
	Nov 27 11:22:27 addons-112776 kubelet[1551]: I1127 11:22:27.143826    1551 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwz8q\" (UniqueName: \"kubernetes.io/projected/845a90e0-8ed1-410b-9e78-7f60585668a7-kube-api-access-pwz8q\") pod \"hello-world-app-5d77478584-l7k96\" (UID: \"845a90e0-8ed1-410b-9e78-7f60585668a7\") " pod="default/hello-world-app-5d77478584-l7k96"
	Nov 27 11:22:27 addons-112776 kubelet[1551]: W1127 11:22:27.372586    1551 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ac8fd910f8ca8daccde30f168451ed3a3c727365db883c1f0be5fa79ac454b74/crio-629befbd1c8281854c7b3d48a2e776dc7af0feb0363b5cea60df05d821f17b70 WatchSource:0}: Error finding container 629befbd1c8281854c7b3d48a2e776dc7af0feb0363b5cea60df05d821f17b70: Status 404 returned error can't find the container with id 629befbd1c8281854c7b3d48a2e776dc7af0feb0363b5cea60df05d821f17b70
	Nov 27 11:22:28 addons-112776 kubelet[1551]: I1127 11:22:28.050698    1551 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z52v\" (UniqueName: \"kubernetes.io/projected/dd9e6c5e-83af-4aeb-9939-0255324ec091-kube-api-access-4z52v\") pod \"dd9e6c5e-83af-4aeb-9939-0255324ec091\" (UID: \"dd9e6c5e-83af-4aeb-9939-0255324ec091\") "
	Nov 27 11:22:28 addons-112776 kubelet[1551]: I1127 11:22:28.052528    1551 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9e6c5e-83af-4aeb-9939-0255324ec091-kube-api-access-4z52v" (OuterVolumeSpecName: "kube-api-access-4z52v") pod "dd9e6c5e-83af-4aeb-9939-0255324ec091" (UID: "dd9e6c5e-83af-4aeb-9939-0255324ec091"). InnerVolumeSpecName "kube-api-access-4z52v". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 27 11:22:28 addons-112776 kubelet[1551]: I1127 11:22:28.151151    1551 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4z52v\" (UniqueName: \"kubernetes.io/projected/dd9e6c5e-83af-4aeb-9939-0255324ec091-kube-api-access-4z52v\") on node \"addons-112776\" DevicePath \"\""
	Nov 27 11:22:28 addons-112776 kubelet[1551]: I1127 11:22:28.180148    1551 scope.go:117] "RemoveContainer" containerID="f0edd138c8d068ace70ef127cfa3dbeb6f750cc1817d4192b3916811d6fc680a"
	Nov 27 11:22:28 addons-112776 kubelet[1551]: I1127 11:22:28.198722    1551 scope.go:117] "RemoveContainer" containerID="f0edd138c8d068ace70ef127cfa3dbeb6f750cc1817d4192b3916811d6fc680a"
	Nov 27 11:22:28 addons-112776 kubelet[1551]: E1127 11:22:28.199110    1551 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0edd138c8d068ace70ef127cfa3dbeb6f750cc1817d4192b3916811d6fc680a\": container with ID starting with f0edd138c8d068ace70ef127cfa3dbeb6f750cc1817d4192b3916811d6fc680a not found: ID does not exist" containerID="f0edd138c8d068ace70ef127cfa3dbeb6f750cc1817d4192b3916811d6fc680a"
	Nov 27 11:22:28 addons-112776 kubelet[1551]: I1127 11:22:28.199162    1551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0edd138c8d068ace70ef127cfa3dbeb6f750cc1817d4192b3916811d6fc680a"} err="failed to get container status \"f0edd138c8d068ace70ef127cfa3dbeb6f750cc1817d4192b3916811d6fc680a\": rpc error: code = NotFound desc = could not find container \"f0edd138c8d068ace70ef127cfa3dbeb6f750cc1817d4192b3916811d6fc680a\": container with ID starting with f0edd138c8d068ace70ef127cfa3dbeb6f750cc1817d4192b3916811d6fc680a not found: ID does not exist"
	Nov 27 11:22:29 addons-112776 kubelet[1551]: I1127 11:22:29.193924    1551 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-l7k96" podStartSLOduration=2.164586756 podCreationTimestamp="2023-11-27 11:22:26 +0000 UTC" firstStartedPulling="2023-11-27 11:22:27.375866714 +0000 UTC m=+275.625509635" lastFinishedPulling="2023-11-27 11:22:28.405163624 +0000 UTC m=+276.654806545" observedRunningTime="2023-11-27 11:22:29.193428495 +0000 UTC m=+277.443071423" watchObservedRunningTime="2023-11-27 11:22:29.193883666 +0000 UTC m=+277.443526592"
	Nov 27 11:22:29 addons-112776 kubelet[1551]: I1127 11:22:29.867421    1551 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="90c20dfd-ccbc-4268-aa1c-712fd30fa7f2" path="/var/lib/kubelet/pods/90c20dfd-ccbc-4268-aa1c-712fd30fa7f2/volumes"
	Nov 27 11:22:29 addons-112776 kubelet[1551]: I1127 11:22:29.867841    1551 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ab26334d-b8f4-4739-8d87-90d2fea9973b" path="/var/lib/kubelet/pods/ab26334d-b8f4-4739-8d87-90d2fea9973b/volumes"
	Nov 27 11:22:29 addons-112776 kubelet[1551]: I1127 11:22:29.868170    1551 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="dd9e6c5e-83af-4aeb-9939-0255324ec091" path="/var/lib/kubelet/pods/dd9e6c5e-83af-4aeb-9939-0255324ec091/volumes"
	Nov 27 11:22:32 addons-112776 kubelet[1551]: I1127 11:22:32.076834    1551 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b76802e2-0511-42c6-8551-0ba110a12957-webhook-cert\") pod \"b76802e2-0511-42c6-8551-0ba110a12957\" (UID: \"b76802e2-0511-42c6-8551-0ba110a12957\") "
	Nov 27 11:22:32 addons-112776 kubelet[1551]: I1127 11:22:32.076919    1551 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tll6f\" (UniqueName: \"kubernetes.io/projected/b76802e2-0511-42c6-8551-0ba110a12957-kube-api-access-tll6f\") pod \"b76802e2-0511-42c6-8551-0ba110a12957\" (UID: \"b76802e2-0511-42c6-8551-0ba110a12957\") "
	Nov 27 11:22:32 addons-112776 kubelet[1551]: I1127 11:22:32.078883    1551 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b76802e2-0511-42c6-8551-0ba110a12957-kube-api-access-tll6f" (OuterVolumeSpecName: "kube-api-access-tll6f") pod "b76802e2-0511-42c6-8551-0ba110a12957" (UID: "b76802e2-0511-42c6-8551-0ba110a12957"). InnerVolumeSpecName "kube-api-access-tll6f". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 27 11:22:32 addons-112776 kubelet[1551]: I1127 11:22:32.079013    1551 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b76802e2-0511-42c6-8551-0ba110a12957-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b76802e2-0511-42c6-8551-0ba110a12957" (UID: "b76802e2-0511-42c6-8551-0ba110a12957"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 11:22:32 addons-112776 kubelet[1551]: I1127 11:22:32.177445    1551 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b76802e2-0511-42c6-8551-0ba110a12957-webhook-cert\") on node \"addons-112776\" DevicePath \"\""
	Nov 27 11:22:32 addons-112776 kubelet[1551]: I1127 11:22:32.177491    1551 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tll6f\" (UniqueName: \"kubernetes.io/projected/b76802e2-0511-42c6-8551-0ba110a12957-kube-api-access-tll6f\") on node \"addons-112776\" DevicePath \"\""
	Nov 27 11:22:32 addons-112776 kubelet[1551]: I1127 11:22:32.192688    1551 scope.go:117] "RemoveContainer" containerID="828d33134683c6c129b6522c6028c3d0b603f0e863d0f7c91bac1492539e2810"
	Nov 27 11:22:32 addons-112776 kubelet[1551]: I1127 11:22:32.210892    1551 scope.go:117] "RemoveContainer" containerID="828d33134683c6c129b6522c6028c3d0b603f0e863d0f7c91bac1492539e2810"
	Nov 27 11:22:32 addons-112776 kubelet[1551]: E1127 11:22:32.211262    1551 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"828d33134683c6c129b6522c6028c3d0b603f0e863d0f7c91bac1492539e2810\": container with ID starting with 828d33134683c6c129b6522c6028c3d0b603f0e863d0f7c91bac1492539e2810 not found: ID does not exist" containerID="828d33134683c6c129b6522c6028c3d0b603f0e863d0f7c91bac1492539e2810"
	Nov 27 11:22:32 addons-112776 kubelet[1551]: I1127 11:22:32.211314    1551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828d33134683c6c129b6522c6028c3d0b603f0e863d0f7c91bac1492539e2810"} err="failed to get container status \"828d33134683c6c129b6522c6028c3d0b603f0e863d0f7c91bac1492539e2810\": rpc error: code = NotFound desc = could not find container \"828d33134683c6c129b6522c6028c3d0b603f0e863d0f7c91bac1492539e2810\": container with ID starting with 828d33134683c6c129b6522c6028c3d0b603f0e863d0f7c91bac1492539e2810 not found: ID does not exist"
	Nov 27 11:22:33 addons-112776 kubelet[1551]: I1127 11:22:33.867570    1551 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b76802e2-0511-42c6-8551-0ba110a12957" path="/var/lib/kubelet/pods/b76802e2-0511-42c6-8551-0ba110a12957/volumes"
	
	* 
	* ==> storage-provisioner [82e96b47910366c1a874e831a55a7fb26c227217b4d43cafad922150da97a753] <==
	* I1127 11:18:41.056085       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1127 11:18:41.158523       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1127 11:18:41.158581       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1127 11:18:41.165955       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1127 11:18:41.166111       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-112776_9df56f90-1f02-439f-8cfb-beace12d203f!
	I1127 11:18:41.166145       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"38539c1e-1247-45fb-acdf-dbee6ad0d8b7", APIVersion:"v1", ResourceVersion:"900", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-112776_9df56f90-1f02-439f-8cfb-beace12d203f became leader
	I1127 11:18:41.267266       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-112776_9df56f90-1f02-439f-8cfb-beace12d203f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-112776 -n addons-112776
helpers_test.go:261: (dbg) Run:  kubectl --context addons-112776 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (156.53s)
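
Note: exit status 28 from the in-node curl is curl's operation-timeout code, so the request to the ingress controller hung rather than being refused. A minimal manual re-check, assuming the same profile name and that the test manifests are still applied (commands mirror the ones the harness runs above; the --max-time flag and the -A ingress listing are additions for diagnosis, not part of the test):

	kubectl --context addons-112776 get ingress -A -o wide    # confirm the ingress was admitted and has an address
	out/minikube-linux-amd64 -p addons-112776 ssh "curl -sv --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"    # -v shows whether the connect or the response stalls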

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-876444 config get cpus: exit status 14 (121.178491ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 config get cpus
functional_test.go:1206: expected config error for "out/minikube-linux-amd64 -p functional-876444 config get cpus" to be -""- but got *"E1127 11:25:54.628761  111360 logFile.go:53] failed to close the audit log: invalid argument"*
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-876444 config get cpus: exit status 14 (88.157539ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- FAIL: TestFunctional/parallel/ConfigCmd (0.52s)
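
Note: the assertion at functional_test.go:1206 expects "config get cpus" to succeed with an empty stderr after "config set cpus 2"; the failure is the stray "failed to close the audit log" message on stderr, not a wrong value. A rough out-of-harness repro, assuming a live functional-876444 profile:

	out/minikube-linux-amd64 -p functional-876444 config set cpus 2
	out/minikube-linux-amd64 -p functional-876444 config get cpus 2>stderr.log    # stdout should print 2
	cat stderr.log    # expected empty; in this run it carried the audit-log close error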

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.13851053s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-876444
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image load --daemon gcr.io/google-containers/addon-resizer:functional-876444 --alsologtostderr
2023/11/27 11:26:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-876444 image load --daemon gcr.io/google-containers/addon-resizer:functional-876444 --alsologtostderr: (7.699917598s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-876444 image ls: (2.260818056s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-876444" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.12s)
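
Note: "image load --daemon" reported success but the tag never appeared in "image ls", so the transfer from the host Docker daemon into the node's CRI-O image store is the suspect step. A manual cross-check, assuming the same tag and profile (the crictl step is an extra diagnostic beyond what the test runs):

	out/minikube-linux-amd64 -p functional-876444 image load --daemon gcr.io/google-containers/addon-resizer:functional-876444 --alsologtostderr
	out/minikube-linux-amd64 -p functional-876444 image ls | grep addon-resizer
	out/minikube-linux-amd64 -p functional-876444 ssh "sudo crictl images | grep addon-resizer"    # query CRI-O directly inside the node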

TestIngressAddonLegacy/serial/ValidateIngressAddons (177.37s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-123827 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-123827 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.586229887s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-123827 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-123827 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5b389f58-64d4-4391-94f9-99b39b32c8b8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5b389f58-64d4-4391-94f9-99b39b32c8b8] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.008382325s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-123827 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1127 11:29:44.765823   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
E1127 11:30:12.451057   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-123827 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.674777324s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-123827 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-123827 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1127 11:30:54.780056   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
E1127 11:30:54.785383   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
E1127 11:30:54.795653   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
E1127 11:30:54.816012   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
E1127 11:30:54.856308   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
E1127 11:30:54.936635   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
E1127 11:30:55.097118   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
E1127 11:30:55.417744   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
E1127 11:30:56.058733   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
E1127 11:30:57.339300   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
E1127 11:30:59.901157   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
E1127 11:31:05.021778   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.008355266s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
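
Note: the nslookup against the node IP (192.168.49.2) timed out, which points at the ingress-dns addon's DNS responder on the node never answering, rather than a wrong record. A quick manual check, assuming the addon's usual pod label in kube-system (label taken from the addon manifest, not from this log):

	kubectl --context ingress-addon-legacy-123827 -n kube-system get pods -l app=minikube-ingress-dns -o wide
	nslookup -timeout=5 hello-john.test 192.168.49.2    # short timeout instead of nslookup's default retries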
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-123827 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-123827 addons disable ingress-dns --alsologtostderr -v=1: (1.939917779s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-123827 addons disable ingress --alsologtostderr -v=1
E1127 11:31:15.262577   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-123827 addons disable ingress --alsologtostderr -v=1: (7.425837278s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-123827
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-123827:

-- stdout --
	[
	    {
	        "Id": "33fe80a0b879241f96eba00cdd066b3b58c755c6ca5940231229c0545359e47b",
	        "Created": "2023-11-27T11:26:59.549702015Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 120444,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-27T11:26:59.809945453Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7b13b8068c138827ed6edd3fefc1858e39f15798035b600ada929f3fdbe10859",
	        "ResolvConfPath": "/var/lib/docker/containers/33fe80a0b879241f96eba00cdd066b3b58c755c6ca5940231229c0545359e47b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/33fe80a0b879241f96eba00cdd066b3b58c755c6ca5940231229c0545359e47b/hostname",
	        "HostsPath": "/var/lib/docker/containers/33fe80a0b879241f96eba00cdd066b3b58c755c6ca5940231229c0545359e47b/hosts",
	        "LogPath": "/var/lib/docker/containers/33fe80a0b879241f96eba00cdd066b3b58c755c6ca5940231229c0545359e47b/33fe80a0b879241f96eba00cdd066b3b58c755c6ca5940231229c0545359e47b-json.log",
	        "Name": "/ingress-addon-legacy-123827",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-123827:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-123827",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2bf758177902c1e1e1a5356f647247b177e9d3b15201b2fe459f55b6e7a7d555-init/diff:/var/lib/docker/overlay2/6890504cd609c764c809309abb3d72eb8ac39b0411e6657ccda2a2f23689cb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2bf758177902c1e1e1a5356f647247b177e9d3b15201b2fe459f55b6e7a7d555/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2bf758177902c1e1e1a5356f647247b177e9d3b15201b2fe459f55b6e7a7d555/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2bf758177902c1e1e1a5356f647247b177e9d3b15201b2fe459f55b6e7a7d555/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-123827",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-123827/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-123827",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-123827",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-123827",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d41444663ab0c361b9d1518e515d85cab26568bf79c36481be8297f7e2b028b5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d41444663ab0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-123827": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "33fe80a0b879",
	                        "ingress-addon-legacy-123827"
	                    ],
	                    "NetworkID": "2f1029692e328e404192596f6fee81f0b8f4d4f58d824871b1a87882aee27e36",
	                    "EndpointID": "352de15a120b60f0bb3f374dafc00ce34412cb2f65e897ba712aaca24842af56",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-123827 -n ingress-addon-legacy-123827
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-123827 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-123827 logs -n 25: (1.099715493s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-876444 image rm                                                   | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC | 27 Nov 23 11:26 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-876444                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-876444 image ls                                                   | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC | 27 Nov 23 11:26 UTC |
	| image   | functional-876444 image ls                                                   | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC | 27 Nov 23 11:26 UTC |
	| image   | functional-876444 image save --daemon                                        | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC | 27 Nov 23 11:26 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-876444                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-876444                                                            | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC |                     |
	|         | image ls --format short                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh     | functional-876444 ssh pgrep                                                  | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC |                     |
	|         | buildkitd                                                                    |                             |         |         |                     |                     |
	| image   | functional-876444 image load                                                 | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC | 27 Nov 23 11:26 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-876444 image ls                                                   | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC | 27 Nov 23 11:26 UTC |
	| image   | functional-876444                                                            | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC | 27 Nov 23 11:26 UTC |
	|         | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-876444 image save --daemon                                        | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC | 27 Nov 23 11:26 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-876444                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-876444                                                            | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC | 27 Nov 23 11:26 UTC |
	|         | image ls --format short                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh     | functional-876444 ssh pgrep                                                  | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC |                     |
	|         | buildkitd                                                                    |                             |         |         |                     |                     |
	| image   | functional-876444                                                            | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC |                     |
	|         | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-876444                                                            | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC | 27 Nov 23 11:26 UTC |
	|         | image ls --format table                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-876444                                                            | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC | 27 Nov 23 11:26 UTC |
	|         | image ls --format json                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-876444 image build -t                                             | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC | 27 Nov 23 11:26 UTC |
	|         | localhost/my-image:functional-876444                                         |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image   | functional-876444 image ls                                                   | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC | 27 Nov 23 11:26 UTC |
	| delete  | -p functional-876444                                                         | functional-876444           | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC | 27 Nov 23 11:26 UTC |
	| start   | -p ingress-addon-legacy-123827                                               | ingress-addon-legacy-123827 | jenkins | v1.32.0 | 27 Nov 23 11:26 UTC | 27 Nov 23 11:28 UTC |
	|         | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                     |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-123827                                                  | ingress-addon-legacy-123827 | jenkins | v1.32.0 | 27 Nov 23 11:28 UTC | 27 Nov 23 11:28 UTC |
	|         | addons enable ingress                                                        |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-123827                                                  | ingress-addon-legacy-123827 | jenkins | v1.32.0 | 27 Nov 23 11:28 UTC | 27 Nov 23 11:28 UTC |
	|         | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-123827                                                  | ingress-addon-legacy-123827 | jenkins | v1.32.0 | 27 Nov 23 11:28 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-123827 ip                                               | ingress-addon-legacy-123827 | jenkins | v1.32.0 | 27 Nov 23 11:30 UTC | 27 Nov 23 11:30 UTC |
	| addons  | ingress-addon-legacy-123827                                                  | ingress-addon-legacy-123827 | jenkins | v1.32.0 | 27 Nov 23 11:31 UTC | 27 Nov 23 11:31 UTC |
	|         | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-123827                                                  | ingress-addon-legacy-123827 | jenkins | v1.32.0 | 27 Nov 23 11:31 UTC | 27 Nov 23 11:31 UTC |
	|         | addons disable ingress                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 11:26:45
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 11:26:45.879096  119811 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:26:45.879417  119811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:26:45.879429  119811 out.go:309] Setting ErrFile to fd 2...
	I1127 11:26:45.879433  119811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:26:45.879710  119811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
	I1127 11:26:45.880357  119811 out.go:303] Setting JSON to false
	I1127 11:26:45.881352  119811 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7759,"bootTime":1701076647,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:26:45.881420  119811 start.go:138] virtualization: kvm guest
	I1127 11:26:45.883753  119811 out.go:177] * [ingress-addon-legacy-123827] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:26:45.885222  119811 out.go:177]   - MINIKUBE_LOCATION=17644
	I1127 11:26:45.886570  119811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:26:45.885238  119811 notify.go:220] Checking for updates...
	I1127 11:26:45.889268  119811 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:26:45.890798  119811 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	I1127 11:26:45.892177  119811 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 11:26:45.893597  119811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 11:26:45.895181  119811 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:26:45.918182  119811 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 11:26:45.918259  119811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:26:45.975116  119811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2023-11-27 11:26:45.966884329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:26:45.975213  119811 docker.go:295] overlay module found
	I1127 11:26:45.977136  119811 out.go:177] * Using the docker driver based on user configuration
	I1127 11:26:45.978443  119811 start.go:298] selected driver: docker
	I1127 11:26:45.978452  119811 start.go:902] validating driver "docker" against <nil>
	I1127 11:26:45.978463  119811 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 11:26:45.979203  119811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:26:46.032779  119811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2023-11-27 11:26:46.024900665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:26:46.032958  119811 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 11:26:46.033178  119811 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 11:26:46.035192  119811 out.go:177] * Using Docker driver with root privileges
	I1127 11:26:46.036962  119811 cni.go:84] Creating CNI manager for ""
	I1127 11:26:46.036988  119811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 11:26:46.037004  119811 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1127 11:26:46.037018  119811 start_flags.go:323] config:
	{Name:ingress-addon-legacy-123827 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-123827 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:26:46.038769  119811 out.go:177] * Starting control plane node ingress-addon-legacy-123827 in cluster ingress-addon-legacy-123827
	I1127 11:26:46.040291  119811 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 11:26:46.041737  119811 out.go:177] * Pulling base image ...
	I1127 11:26:46.043275  119811 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1127 11:26:46.043378  119811 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 11:26:46.059797  119811 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1127 11:26:46.059827  119811 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1127 11:26:46.068296  119811 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1127 11:26:46.068328  119811 cache.go:56] Caching tarball of preloaded images
	I1127 11:26:46.068527  119811 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1127 11:26:46.070430  119811 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1127 11:26:46.072006  119811 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1127 11:26:46.099171  119811 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1127 11:26:51.331357  119811 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1127 11:26:51.331468  119811 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1127 11:26:52.341261  119811 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1127 11:26:52.341661  119811 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/config.json ...
	I1127 11:26:52.341696  119811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/config.json: {Name:mkc7d7160fc725b99f0e03c4c5cc873bb44073d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:26:52.341885  119811 cache.go:194] Successfully downloaded all kic artifacts
	I1127 11:26:52.341916  119811 start.go:365] acquiring machines lock for ingress-addon-legacy-123827: {Name:mk558c44a1a7c3546e5cbad4fd1b7dbfa464be70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:26:52.341962  119811 start.go:369] acquired machines lock for "ingress-addon-legacy-123827" in 34.608µs
	I1127 11:26:52.341982  119811 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-123827 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-123827 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 11:26:52.342053  119811 start.go:125] createHost starting for "" (driver="docker")
	I1127 11:26:52.344574  119811 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1127 11:26:52.344800  119811 start.go:159] libmachine.API.Create for "ingress-addon-legacy-123827" (driver="docker")
	I1127 11:26:52.344830  119811 client.go:168] LocalClient.Create starting
	I1127 11:26:52.344896  119811 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem
	I1127 11:26:52.344927  119811 main.go:141] libmachine: Decoding PEM data...
	I1127 11:26:52.344944  119811 main.go:141] libmachine: Parsing certificate...
	I1127 11:26:52.344998  119811 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem
	I1127 11:26:52.345020  119811 main.go:141] libmachine: Decoding PEM data...
	I1127 11:26:52.345028  119811 main.go:141] libmachine: Parsing certificate...
	I1127 11:26:52.345316  119811 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-123827 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1127 11:26:52.361573  119811 cli_runner.go:211] docker network inspect ingress-addon-legacy-123827 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1127 11:26:52.361655  119811 network_create.go:281] running [docker network inspect ingress-addon-legacy-123827] to gather additional debugging logs...
	I1127 11:26:52.361675  119811 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-123827
	W1127 11:26:52.376806  119811 cli_runner.go:211] docker network inspect ingress-addon-legacy-123827 returned with exit code 1
	I1127 11:26:52.376841  119811 network_create.go:284] error running [docker network inspect ingress-addon-legacy-123827]: docker network inspect ingress-addon-legacy-123827: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-123827 not found
	I1127 11:26:52.376857  119811 network_create.go:286] output of [docker network inspect ingress-addon-legacy-123827]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-123827 not found
	
	** /stderr **
	I1127 11:26:52.376977  119811 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 11:26:52.392588  119811 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0028127c0}
	I1127 11:26:52.392637  119811 network_create.go:124] attempt to create docker network ingress-addon-legacy-123827 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1127 11:26:52.392710  119811 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-123827 ingress-addon-legacy-123827
	I1127 11:26:52.443201  119811 network_create.go:108] docker network ingress-addon-legacy-123827 192.168.49.0/24 created
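The network_create step above probes for a free private /24 (here 192.168.49.0/24) and then creates a labeled bridge network with a fixed gateway and MTU. A minimal sketch of the same docker invocation, using a hypothetical network name so it does not collide with the test's network:

    docker network create \
      --driver=bridge \
      --subnet=192.168.49.0/24 \
      --gateway=192.168.49.1 \
      -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      example-net   # hypothetical name; the test uses ingress-addon-legacy-123827
    docker network inspect example-net --format '{{(index .IPAM.Config 0).Subnet}}'
    docker network rm example-net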
	I1127 11:26:52.443263  119811 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-123827" container
	I1127 11:26:52.443362  119811 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1127 11:26:52.458773  119811 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-123827 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-123827 --label created_by.minikube.sigs.k8s.io=true
	I1127 11:26:52.475780  119811 oci.go:103] Successfully created a docker volume ingress-addon-legacy-123827
	I1127 11:26:52.475867  119811 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-123827-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-123827 --entrypoint /usr/bin/test -v ingress-addon-legacy-123827:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1127 11:26:54.209661  119811 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-123827-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-123827 --entrypoint /usr/bin/test -v ingress-addon-legacy-123827:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib: (1.733732683s)
	I1127 11:26:54.209694  119811 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-123827
	I1127 11:26:54.209714  119811 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1127 11:26:54.209735  119811 kic.go:194] Starting extracting preloaded images to volume ...
	I1127 11:26:54.209801  119811 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-123827:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1127 11:26:59.484759  119811 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-123827:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (5.274906822s)
	I1127 11:26:59.484792  119811 kic.go:203] duration metric: took 5.275055 seconds to extract preloaded images to volume
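The preload tarball mounted read-only above is an lz4-compressed tar of a pre-populated container image store for this Kubernetes/runtime pair; extracting it into the node volume's /var is what normally lets the cluster skip pulling the core images. A sketch for peeking at such a tarball on the host, assuming the same cache path shown in the log:

    lz4 -dc /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 \
      | tar -t | head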
	W1127 11:26:59.484925  119811 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1127 11:26:59.485010  119811 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1127 11:26:59.534614  119811 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-123827 --name ingress-addon-legacy-123827 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-123827 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-123827 --network ingress-addon-legacy-123827 --ip 192.168.49.2 --volume ingress-addon-legacy-123827:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 11:26:59.818149  119811 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123827 --format={{.State.Running}}
	I1127 11:26:59.836211  119811 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123827 --format={{.State.Status}}
	I1127 11:26:59.853652  119811 cli_runner.go:164] Run: docker exec ingress-addon-legacy-123827 stat /var/lib/dpkg/alternatives/iptables
	I1127 11:26:59.894142  119811 oci.go:144] the created container "ingress-addon-legacy-123827" has a running status.
	I1127 11:26:59.894186  119811 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/ingress-addon-legacy-123827/id_rsa...
	I1127 11:27:00.156448  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/ingress-addon-legacy-123827/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1127 11:27:00.156493  119811 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17644-72381/.minikube/machines/ingress-addon-legacy-123827/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1127 11:27:00.177587  119811 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123827 --format={{.State.Status}}
	I1127 11:27:00.203109  119811 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1127 11:27:00.203133  119811 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-123827 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1127 11:27:00.267426  119811 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123827 --format={{.State.Status}}
	I1127 11:27:00.286762  119811 machine.go:88] provisioning docker machine ...
	I1127 11:27:00.286812  119811 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-123827"
	I1127 11:27:00.286890  119811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123827
	I1127 11:27:00.305930  119811 main.go:141] libmachine: Using SSH client type: native
	I1127 11:27:00.306341  119811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1127 11:27:00.306361  119811 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-123827 && echo "ingress-addon-legacy-123827" | sudo tee /etc/hostname
	I1127 11:27:00.498892  119811 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-123827
	
	I1127 11:27:00.498993  119811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123827
	I1127 11:27:00.516237  119811 main.go:141] libmachine: Using SSH client type: native
	I1127 11:27:00.516610  119811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1127 11:27:00.516632  119811 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-123827' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-123827/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-123827' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 11:27:00.639906  119811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 11:27:00.639940  119811 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17644-72381/.minikube CaCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17644-72381/.minikube}
	I1127 11:27:00.639988  119811 ubuntu.go:177] setting up certificates
	I1127 11:27:00.640011  119811 provision.go:83] configureAuth start
	I1127 11:27:00.640072  119811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-123827
	I1127 11:27:00.656145  119811 provision.go:138] copyHostCerts
	I1127 11:27:00.656188  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem
	I1127 11:27:00.656219  119811 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem, removing ...
	I1127 11:27:00.656231  119811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem
	I1127 11:27:00.656295  119811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem (1675 bytes)
	I1127 11:27:00.656412  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem
	I1127 11:27:00.656436  119811 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem, removing ...
	I1127 11:27:00.656444  119811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem
	I1127 11:27:00.656472  119811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem (1082 bytes)
	I1127 11:27:00.656522  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem
	I1127 11:27:00.656541  119811 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem, removing ...
	I1127 11:27:00.656545  119811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem
	I1127 11:27:00.656568  119811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem (1123 bytes)
	I1127 11:27:00.656619  119811 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-123827 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-123827]
	I1127 11:27:00.787332  119811 provision.go:172] copyRemoteCerts
	I1127 11:27:00.787399  119811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 11:27:00.787439  119811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123827
	I1127 11:27:00.804083  119811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/ingress-addon-legacy-123827/id_rsa Username:docker}
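Every ssh_runner session above goes through the 127.0.0.1:32787 mapping published for the container's port 22, authenticating as the docker user with the machine key generated a moment earlier. The equivalent manual connection, useful when reproducing these steps by hand (the host port is allocated dynamically via --publish=127.0.0.1::22, so it differs per run):

    ssh -i /home/jenkins/minikube-integration/17644-72381/.minikube/machines/ingress-addon-legacy-123827/id_rsa \
      -p 32787 docker@127.0.0.1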
	I1127 11:27:00.892192  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1127 11:27:00.892262  119811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1127 11:27:00.913638  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1127 11:27:00.913722  119811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1127 11:27:00.935636  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1127 11:27:00.935722  119811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 11:27:00.956059  119811 provision.go:86] duration metric: configureAuth took 316.033553ms
	I1127 11:27:00.956086  119811 ubuntu.go:193] setting minikube options for container-runtime
	I1127 11:27:00.956298  119811 config.go:182] Loaded profile config "ingress-addon-legacy-123827": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1127 11:27:00.956409  119811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123827
	I1127 11:27:00.972677  119811 main.go:141] libmachine: Using SSH client type: native
	I1127 11:27:00.973008  119811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1127 11:27:00.973027  119811 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 11:27:01.210878  119811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 11:27:01.210915  119811 machine.go:91] provisioned docker machine in 924.111244ms
	I1127 11:27:01.210926  119811 client.go:171] LocalClient.Create took 8.866090225s
	I1127 11:27:01.210949  119811 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-123827" took 8.866149287s
	I1127 11:27:01.210962  119811 start.go:300] post-start starting for "ingress-addon-legacy-123827" (driver="docker")
	I1127 11:27:01.210977  119811 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 11:27:01.211044  119811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 11:27:01.211097  119811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123827
	I1127 11:27:01.228210  119811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/ingress-addon-legacy-123827/id_rsa Username:docker}
	I1127 11:27:01.320604  119811 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 11:27:01.323814  119811 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 11:27:01.323852  119811 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 11:27:01.323864  119811 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 11:27:01.323873  119811 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1127 11:27:01.323888  119811 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-72381/.minikube/addons for local assets ...
	I1127 11:27:01.323955  119811 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-72381/.minikube/files for local assets ...
	I1127 11:27:01.324029  119811 filesync.go:149] local asset: /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem -> 791532.pem in /etc/ssl/certs
	I1127 11:27:01.324039  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem -> /etc/ssl/certs/791532.pem
	I1127 11:27:01.324123  119811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 11:27:01.332508  119811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem --> /etc/ssl/certs/791532.pem (1708 bytes)
	I1127 11:27:01.354487  119811 start.go:303] post-start completed in 143.503614ms
	I1127 11:27:01.354869  119811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-123827
	I1127 11:27:01.371905  119811 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/config.json ...
	I1127 11:27:01.372170  119811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 11:27:01.372216  119811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123827
	I1127 11:27:01.388436  119811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/ingress-addon-legacy-123827/id_rsa Username:docker}
	I1127 11:27:01.480402  119811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 11:27:01.484523  119811 start.go:128] duration metric: createHost completed in 9.142456977s
	I1127 11:27:01.484546  119811 start.go:83] releasing machines lock for "ingress-addon-legacy-123827", held for 9.142573094s
	I1127 11:27:01.484600  119811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-123827
	I1127 11:27:01.500341  119811 ssh_runner.go:195] Run: cat /version.json
	I1127 11:27:01.500358  119811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 11:27:01.500400  119811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123827
	I1127 11:27:01.500418  119811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123827
	I1127 11:27:01.517818  119811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/ingress-addon-legacy-123827/id_rsa Username:docker}
	I1127 11:27:01.518308  119811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/ingress-addon-legacy-123827/id_rsa Username:docker}
	I1127 11:27:01.689068  119811 ssh_runner.go:195] Run: systemctl --version
	I1127 11:27:01.693317  119811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 11:27:01.831758  119811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 11:27:01.836123  119811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 11:27:01.855337  119811 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1127 11:27:01.855428  119811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 11:27:01.884516  119811 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1127 11:27:01.884540  119811 start.go:472] detecting cgroup driver to use...
	I1127 11:27:01.884571  119811 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 11:27:01.884611  119811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 11:27:01.899151  119811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 11:27:01.909966  119811 docker.go:203] disabling cri-docker service (if available) ...
	I1127 11:27:01.910043  119811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 11:27:01.923233  119811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 11:27:01.937387  119811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 11:27:02.014615  119811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 11:27:02.102401  119811 docker.go:219] disabling docker service ...
	I1127 11:27:02.102491  119811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 11:27:02.122583  119811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 11:27:02.134614  119811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 11:27:02.210822  119811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 11:27:02.298420  119811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 11:27:02.309852  119811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 11:27:02.325358  119811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1127 11:27:02.325412  119811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 11:27:02.335072  119811 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1127 11:27:02.335156  119811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 11:27:02.344847  119811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 11:27:02.354342  119811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
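The three sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs (matching the driver detected on the host a moment earlier), and re-insert conmon_cgroup = "pod" directly under the cgroup_manager line. Assuming stock section placement, the relevant parts of the drop-in afterwards would look like:

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"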
	I1127 11:27:02.364162  119811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 11:27:02.373528  119811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 11:27:02.381847  119811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 11:27:02.390024  119811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 11:27:02.472946  119811 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1127 11:27:02.578567  119811 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1127 11:27:02.578649  119811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1127 11:27:02.582299  119811 start.go:540] Will wait 60s for crictl version
	I1127 11:27:02.582358  119811 ssh_runner.go:195] Run: which crictl
	I1127 11:27:02.585860  119811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 11:27:02.620458  119811 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1127 11:27:02.620554  119811 ssh_runner.go:195] Run: crio --version
	I1127 11:27:02.656963  119811 ssh_runner.go:195] Run: crio --version
	I1127 11:27:02.695543  119811 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1127 11:27:02.697283  119811 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-123827 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 11:27:02.714567  119811 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1127 11:27:02.718594  119811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
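The one-liner above updates /etc/hosts by filtering out any stale host.minikube.internal line, appending the fresh entry, and cp-ing the temp file back over /etc/hosts. Copying instead of renaming matters here: inside a Docker container /etc/hosts is a bind mount, so the file can be overwritten in place but its inode cannot be replaced with mv or sed -i. The same pattern with a hypothetical entry:

    { grep -v $'\texample.internal$' /etc/hosts; \
      printf '10.0.0.5\texample.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts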
	I1127 11:27:02.729594  119811 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1127 11:27:02.729681  119811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 11:27:02.774762  119811 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1127 11:27:02.774844  119811 ssh_runner.go:195] Run: which lz4
	I1127 11:27:02.778438  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1127 11:27:02.778525  119811 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1127 11:27:02.781868  119811 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1127 11:27:02.781901  119811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1127 11:27:03.854312  119811 crio.go:444] Took 1.075808 seconds to copy over tarball
	I1127 11:27:03.854376  119811 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1127 11:27:06.158612  119811 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.304203415s)
	I1127 11:27:06.158642  119811 crio.go:451] Took 2.304304 seconds to extract the tarball
	I1127 11:27:06.158651  119811 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1127 11:27:06.228918  119811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 11:27:06.260775  119811 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1127 11:27:06.260802  119811 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1127 11:27:06.260871  119811 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1127 11:27:06.260894  119811 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1127 11:27:06.260913  119811 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 11:27:06.260934  119811 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1127 11:27:06.260959  119811 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1127 11:27:06.260876  119811 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 11:27:06.261098  119811 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1127 11:27:06.261103  119811 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1127 11:27:06.262360  119811 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1127 11:27:06.262369  119811 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1127 11:27:06.262388  119811 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1127 11:27:06.262361  119811 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1127 11:27:06.262359  119811 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 11:27:06.262363  119811 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1127 11:27:06.262364  119811 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1127 11:27:06.262642  119811 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 11:27:06.399743  119811 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 11:27:06.403473  119811 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1127 11:27:06.408674  119811 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1127 11:27:06.412046  119811 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1127 11:27:06.412446  119811 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1127 11:27:06.419759  119811 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1127 11:27:06.450054  119811 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1127 11:27:06.450150  119811 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 11:27:06.450194  119811 ssh_runner.go:195] Run: which crictl
	I1127 11:27:06.454135  119811 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1127 11:27:06.459205  119811 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1127 11:27:06.459255  119811 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1127 11:27:06.459313  119811 ssh_runner.go:195] Run: which crictl
	I1127 11:27:06.467801  119811 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1127 11:27:06.467875  119811 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1127 11:27:06.467925  119811 ssh_runner.go:195] Run: which crictl
	I1127 11:27:06.467945  119811 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1127 11:27:06.468000  119811 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1127 11:27:06.468041  119811 ssh_runner.go:195] Run: which crictl
	I1127 11:27:06.468352  119811 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1127 11:27:06.468390  119811 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1127 11:27:06.468426  119811 ssh_runner.go:195] Run: which crictl
	I1127 11:27:06.477145  119811 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1127 11:27:06.477194  119811 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1127 11:27:06.477244  119811 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 11:27:06.477247  119811 ssh_runner.go:195] Run: which crictl
	I1127 11:27:06.555465  119811 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1127 11:27:06.555512  119811 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1127 11:27:06.555546  119811 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1127 11:27:06.555557  119811 ssh_runner.go:195] Run: which crictl
	I1127 11:27:06.555592  119811 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1127 11:27:06.555652  119811 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1127 11:27:06.555677  119811 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1127 11:27:06.577214  119811 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1127 11:27:06.577294  119811 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1127 11:27:06.577314  119811 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1127 11:27:06.666212  119811 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1127 11:27:06.666264  119811 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1127 11:27:06.666321  119811 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1127 11:27:06.666393  119811 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1127 11:27:06.677595  119811 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1127 11:27:06.677635  119811 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1127 11:27:06.867872  119811 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 11:27:07.003889  119811 cache_images.go:92] LoadImages completed in 743.066169ms
	W1127 11:27:07.003977  119811 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
	I1127 11:27:07.004062  119811 ssh_runner.go:195] Run: crio config
	I1127 11:27:07.050674  119811 cni.go:84] Creating CNI manager for ""
	I1127 11:27:07.050697  119811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 11:27:07.050713  119811 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 11:27:07.050731  119811 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-123827 NodeName:ingress-addon-legacy-123827 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1127 11:27:07.050858  119811 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-123827"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
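This generated manifest is later written to /var/tmp/minikube/kubeadm.yaml (see the scp and cp steps below) and drives kubeadm init. A hedged sketch for sanity-checking it inside the node before init, assuming the v1.18 kubeadm binary staged under /var/lib/minikube/binaries as the log indicates:

    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init phase preflight \
      --config /var/tmp/minikube/kubeadm.yaml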
	
	I1127 11:27:07.050927  119811 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-123827 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-123827 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
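The kubelet unit drop-in above uses the standard systemd override idiom: the bare ExecStart= first clears the command inherited from the packaged unit, so the following ExecStart= replaces it rather than adding a second one (systemd rejects multiple ExecStart lines for non-oneshot services). The same idiom for a hypothetical service override:

    # /etc/systemd/system/example.service.d/override.conf  (hypothetical)
    [Service]
    ExecStart=
    ExecStart=/usr/local/bin/example --flag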
	I1127 11:27:07.050981  119811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1127 11:27:07.059166  119811 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 11:27:07.059253  119811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1127 11:27:07.067265  119811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1127 11:27:07.083471  119811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1127 11:27:07.099713  119811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1127 11:27:07.116682  119811 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1127 11:27:07.120367  119811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 11:27:07.130614  119811 certs.go:56] Setting up /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827 for IP: 192.168.49.2
	I1127 11:27:07.130661  119811 certs.go:190] acquiring lock for shared ca certs: {Name:mk5858a15575801c48b8e08b34d7442dd346ca1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:27:07.130880  119811 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.key
	I1127 11:27:07.130955  119811 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.key
	I1127 11:27:07.131012  119811 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.key
	I1127 11:27:07.131028  119811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt with IP's: []
	I1127 11:27:07.214267  119811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt ...
	I1127 11:27:07.214307  119811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: {Name:mka45b7cf72b9ee9746507eb7ef8a0b396137d07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:27:07.214515  119811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.key ...
	I1127 11:27:07.214532  119811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.key: {Name:mke5a5636b11e7f26cd3ffd6a1f06b82ba1b6ee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:27:07.214649  119811 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/apiserver.key.dd3b5fb2
	I1127 11:27:07.214668  119811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1127 11:27:07.289835  119811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/apiserver.crt.dd3b5fb2 ...
	I1127 11:27:07.289877  119811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/apiserver.crt.dd3b5fb2: {Name:mk6c0063042eeab65cba1035a778a5d0fe9ad0a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:27:07.290085  119811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/apiserver.key.dd3b5fb2 ...
	I1127 11:27:07.290105  119811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/apiserver.key.dd3b5fb2: {Name:mka73da8fc83492bb6f1e50988a63d2cc5f75ced Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:27:07.290200  119811 certs.go:337] copying /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/apiserver.crt
	I1127 11:27:07.290336  119811 certs.go:341] copying /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/apiserver.key
	I1127 11:27:07.290429  119811 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/proxy-client.key
	I1127 11:27:07.290450  119811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/proxy-client.crt with IP's: []
	I1127 11:27:07.401147  119811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/proxy-client.crt ...
	I1127 11:27:07.401189  119811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/proxy-client.crt: {Name:mk75855a03be086dc4a3ecdc585e4dbdf206cbfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:27:07.401390  119811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/proxy-client.key ...
	I1127 11:27:07.401409  119811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/proxy-client.key: {Name:mk8b7fa446474d98ae8944d508757c0358424053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:27:07.401511  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1127 11:27:07.401541  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1127 11:27:07.401568  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1127 11:27:07.401588  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1127 11:27:07.401620  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1127 11:27:07.401640  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1127 11:27:07.401660  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1127 11:27:07.401693  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1127 11:27:07.401763  119811 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/79153.pem (1338 bytes)
	W1127 11:27:07.401815  119811 certs.go:433] ignoring /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/79153_empty.pem, impossibly tiny 0 bytes
	I1127 11:27:07.401836  119811 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca-key.pem (1679 bytes)
	I1127 11:27:07.401874  119811 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem (1082 bytes)
	I1127 11:27:07.401913  119811 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem (1123 bytes)
	I1127 11:27:07.401952  119811 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem (1675 bytes)
	I1127 11:27:07.402014  119811 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem (1708 bytes)
	I1127 11:27:07.402056  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/79153.pem -> /usr/share/ca-certificates/79153.pem
	I1127 11:27:07.402080  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem -> /usr/share/ca-certificates/791532.pem
	I1127 11:27:07.402098  119811 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:27:07.402815  119811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1127 11:27:07.425376  119811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1127 11:27:07.447166  119811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1127 11:27:07.469107  119811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1127 11:27:07.490749  119811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 11:27:07.512332  119811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1127 11:27:07.534347  119811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 11:27:07.556952  119811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1127 11:27:07.578853  119811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/certs/79153.pem --> /usr/share/ca-certificates/79153.pem (1338 bytes)
	I1127 11:27:07.601491  119811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem --> /usr/share/ca-certificates/791532.pem (1708 bytes)
	I1127 11:27:07.623448  119811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 11:27:07.645345  119811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1127 11:27:07.661290  119811 ssh_runner.go:195] Run: openssl version
	I1127 11:27:07.666284  119811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/79153.pem && ln -fs /usr/share/ca-certificates/79153.pem /etc/ssl/certs/79153.pem"
	I1127 11:27:07.674782  119811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/79153.pem
	I1127 11:27:07.678103  119811 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 11:23 /usr/share/ca-certificates/79153.pem
	I1127 11:27:07.678159  119811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/79153.pem
	I1127 11:27:07.684495  119811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/79153.pem /etc/ssl/certs/51391683.0"
	I1127 11:27:07.693090  119811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/791532.pem && ln -fs /usr/share/ca-certificates/791532.pem /etc/ssl/certs/791532.pem"
	I1127 11:27:07.701698  119811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791532.pem
	I1127 11:27:07.705081  119811 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 11:23 /usr/share/ca-certificates/791532.pem
	I1127 11:27:07.705136  119811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791532.pem
	I1127 11:27:07.711486  119811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/791532.pem /etc/ssl/certs/3ec20f2e.0"
	I1127 11:27:07.720076  119811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 11:27:07.728433  119811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:27:07.731634  119811 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 11:17 /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:27:07.731694  119811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:27:07.738132  119811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
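
	The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash links: OpenSSL resolves a CA in /etc/ssl/certs by hashing the certificate's subject name and looking for <hash>.0, which is why each certificate is hashed with `openssl x509 -hash` before being linked. A minimal sketch of the same step for the minikube CA (paths as in the log; the hash comes out to b5213941 here):

	    $ HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
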
	I1127 11:27:07.746897  119811 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 11:27:07.750318  119811 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 11:27:07.750372  119811 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-123827 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-123827 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:27:07.750473  119811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1127 11:27:07.750514  119811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1127 11:27:07.783057  119811 cri.go:89] found id: ""
	I1127 11:27:07.783115  119811 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1127 11:27:07.791284  119811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 11:27:07.799421  119811 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1127 11:27:07.799489  119811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 11:27:07.807447  119811 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 11:27:07.807497  119811 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1127 11:27:07.851192  119811 kubeadm.go:322] W1127 11:27:07.850580    1372 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1127 11:27:07.888186  119811 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1127 11:27:07.957455  119811 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 11:27:11.466861  119811 kubeadm.go:322] W1127 11:27:11.466447    1372 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1127 11:27:11.468900  119811 kubeadm.go:322] W1127 11:27:11.467776    1372 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1127 11:27:19.430348  119811 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1127 11:27:19.430445  119811 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 11:27:19.430533  119811 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1127 11:27:19.430597  119811 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1046-gcp
	I1127 11:27:19.430628  119811 kubeadm.go:322] OS: Linux
	I1127 11:27:19.430666  119811 kubeadm.go:322] CGROUPS_CPU: enabled
	I1127 11:27:19.430731  119811 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1127 11:27:19.430776  119811 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1127 11:27:19.430861  119811 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1127 11:27:19.430904  119811 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1127 11:27:19.430955  119811 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1127 11:27:19.431014  119811 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 11:27:19.431139  119811 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 11:27:19.431262  119811 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 11:27:19.431387  119811 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 11:27:19.431515  119811 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 11:27:19.431583  119811 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1127 11:27:19.431682  119811 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 11:27:19.433864  119811 out.go:204]   - Generating certificates and keys ...
	I1127 11:27:19.433942  119811 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 11:27:19.434014  119811 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 11:27:19.434086  119811 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 11:27:19.434183  119811 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1127 11:27:19.434262  119811 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1127 11:27:19.434306  119811 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1127 11:27:19.434404  119811 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1127 11:27:19.434565  119811 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-123827 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1127 11:27:19.434648  119811 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1127 11:27:19.434799  119811 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-123827 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1127 11:27:19.434902  119811 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 11:27:19.434975  119811 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 11:27:19.435023  119811 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1127 11:27:19.435111  119811 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 11:27:19.435203  119811 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 11:27:19.435277  119811 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 11:27:19.435366  119811 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 11:27:19.435445  119811 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 11:27:19.435555  119811 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 11:27:19.437246  119811 out.go:204]   - Booting up control plane ...
	I1127 11:27:19.437346  119811 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 11:27:19.437422  119811 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 11:27:19.437496  119811 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 11:27:19.437584  119811 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 11:27:19.437776  119811 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 11:27:19.437872  119811 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502349 seconds
	I1127 11:27:19.437970  119811 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 11:27:19.438077  119811 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 11:27:19.438137  119811 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 11:27:19.438289  119811 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-123827 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1127 11:27:19.438379  119811 kubeadm.go:322] [bootstrap-token] Using token: nino0u.prf7ii7pw8amd4ub
	I1127 11:27:19.440182  119811 out.go:204]   - Configuring RBAC rules ...
	I1127 11:27:19.440304  119811 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 11:27:19.440403  119811 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 11:27:19.440572  119811 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 11:27:19.440741  119811 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 11:27:19.440897  119811 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 11:27:19.440981  119811 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 11:27:19.441077  119811 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 11:27:19.441144  119811 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 11:27:19.441183  119811 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 11:27:19.441189  119811 kubeadm.go:322] 
	I1127 11:27:19.441251  119811 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 11:27:19.441263  119811 kubeadm.go:322] 
	I1127 11:27:19.441329  119811 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 11:27:19.441336  119811 kubeadm.go:322] 
	I1127 11:27:19.441360  119811 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 11:27:19.441412  119811 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 11:27:19.441455  119811 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 11:27:19.441461  119811 kubeadm.go:322] 
	I1127 11:27:19.441506  119811 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 11:27:19.441598  119811 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 11:27:19.441671  119811 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 11:27:19.441679  119811 kubeadm.go:322] 
	I1127 11:27:19.441771  119811 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1127 11:27:19.441834  119811 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 11:27:19.441840  119811 kubeadm.go:322] 
	I1127 11:27:19.441906  119811 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nino0u.prf7ii7pw8amd4ub \
	I1127 11:27:19.441993  119811 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8a429d79c655c2807afe3f51b29d4e9332b2ae21312f3b8d4be03bf35a7ebe07 \
	I1127 11:27:19.442023  119811 kubeadm.go:322]     --control-plane 
	I1127 11:27:19.442027  119811 kubeadm.go:322] 
	I1127 11:27:19.442093  119811 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 11:27:19.442100  119811 kubeadm.go:322] 
	I1127 11:27:19.442163  119811 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nino0u.prf7ii7pw8amd4ub \
	I1127 11:27:19.442266  119811 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8a429d79c655c2807afe3f51b29d4e9332b2ae21312f3b8d4be03bf35a7ebe07 
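
	The --discovery-token-ca-cert-hash printed with the join commands above is a SHA-256 digest of the cluster CA's public key, which joining nodes use to authenticate the control plane before trusting it. It can be recomputed from the CA certificate with the standard kubeadm recipe (this assumes an RSA CA key, which is what minikube generates; the cert path is the certificateDir logged above):

	    $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	        | openssl rsa -pubin -outform der 2>/dev/null \
	        | openssl dgst -sha256 -hex | sed 's/^.* //'
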
	I1127 11:27:19.442282  119811 cni.go:84] Creating CNI manager for ""
	I1127 11:27:19.442291  119811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 11:27:19.443817  119811 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1127 11:27:19.445273  119811 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1127 11:27:19.449335  119811 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1127 11:27:19.449353  119811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1127 11:27:19.466507  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
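
	With the docker driver and the crio runtime, minikube selects kindnet as the CNI: it verifies /opt/cni/bin/portmap exists, then applies the kindnet manifest with the cluster's own kubectl. Once the kindnet DaemonSet pod starts it writes a CNI config onto the node; one way to confirm that (the conflist filename shown is kindnet's usual convention, not something this log records):

	    $ minikube -p ingress-addon-legacy-123827 ssh -- sudo ls /etc/cni/net.d
	    10-kindnet.conflist
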
	I1127 11:27:19.918621  119811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 11:27:19.918682  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:19.918738  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=81390b5609e7feb2151fde4633273d04eb05a21f minikube.k8s.io/name=ingress-addon-legacy-123827 minikube.k8s.io/updated_at=2023_11_27T11_27_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:19.998990  119811 ops.go:34] apiserver oom_adj: -16
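
	The oom_adj value of -16 read back here means the kernel's OOM killer will target the apiserver only as a last resort (the legacy /proc scale runs from -17, never kill, to +15). The same check by hand:

	    $ cat /proc/$(pgrep -nx kube-apiserver)/oom_adj
	    -16
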
	I1127 11:27:19.999138  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:20.104856  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:20.702837  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:21.203173  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:21.702261  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:22.203027  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:22.702863  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:23.202402  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:23.703075  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:24.202456  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:24.703150  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:25.202341  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:25.702145  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:26.202815  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:26.702451  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:27.202217  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:27.702164  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:28.202944  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:28.702530  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:29.202481  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:29.703168  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:30.202959  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:30.702266  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:31.202722  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:31.702561  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:32.203119  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:32.702222  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:33.202184  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:33.702233  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:34.202850  119811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:27:34.271184  119811 kubeadm.go:1081] duration metric: took 14.352547016s to wait for elevateKubeSystemPrivileges.
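
	The burst of `kubectl get sa default` calls above (one roughly every 500ms from 11:27:19 to 11:27:34) polls for the default ServiceAccount: the controller-manager creates it asynchronously after the namespace is initialized, and workloads cannot be admitted until it exists. A shell equivalent of the same wait:

	    $ until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done
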
	I1127 11:27:34.271234  119811 kubeadm.go:406] StartCluster complete in 26.520871068s
	I1127 11:27:34.271262  119811 settings.go:142] acquiring lock: {Name:mkff9c1e77c1a71ba60e8e9acbffbd8799fc8519 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:27:34.271342  119811 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:27:34.272116  119811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/kubeconfig: {Name:mke9c53ad28720f96b51e42e525b68d1097488ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:27:34.272385  119811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 11:27:34.272518  119811 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1127 11:27:34.272618  119811 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-123827"
	I1127 11:27:34.272591  119811 config.go:182] Loaded profile config "ingress-addon-legacy-123827": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1127 11:27:34.272703  119811 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-123827"
	I1127 11:27:34.272736  119811 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-123827"
	I1127 11:27:34.272758  119811 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-123827"
	I1127 11:27:34.272770  119811 host.go:66] Checking if "ingress-addon-legacy-123827" exists ...
	I1127 11:27:34.273159  119811 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123827 --format={{.State.Status}}
	I1127 11:27:34.273134  119811 kapi.go:59] client config for ingress-addon-legacy-123827: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt", KeyFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.key", CAFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24d80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 11:27:34.273329  119811 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123827 --format={{.State.Status}}
	I1127 11:27:34.273996  119811 cert_rotation.go:137] Starting client certificate rotation controller
	I1127 11:27:34.292877  119811 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-123827" context rescaled to 1 replicas
	I1127 11:27:34.292927  119811 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 11:27:34.295503  119811 out.go:177] * Verifying Kubernetes components...
	I1127 11:27:34.293713  119811 kapi.go:59] client config for ingress-addon-legacy-123827: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt", KeyFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.key", CAFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24d80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 11:27:34.298398  119811 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 11:27:34.297009  119811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:27:34.297283  119811 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-123827"
	I1127 11:27:34.300260  119811 host.go:66] Checking if "ingress-addon-legacy-123827" exists ...
	I1127 11:27:34.300371  119811 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 11:27:34.300390  119811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 11:27:34.300445  119811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123827
	I1127 11:27:34.300868  119811 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123827 --format={{.State.Status}}
	I1127 11:27:34.321565  119811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/ingress-addon-legacy-123827/id_rsa Username:docker}
	I1127 11:27:34.322495  119811 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 11:27:34.322515  119811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 11:27:34.322566  119811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123827
	I1127 11:27:34.338409  119811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/ingress-addon-legacy-123827/id_rsa Username:docker}
	I1127 11:27:34.385250  119811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1127 11:27:34.385885  119811 kapi.go:59] client config for ingress-addon-legacy-123827: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt", KeyFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.key", CAFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24d80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 11:27:34.386256  119811 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-123827" to be "Ready" ...
	I1127 11:27:34.458976  119811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 11:27:34.459379  119811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 11:27:34.852500  119811 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
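
	The sed pipeline at 11:27:34.385 splices a hosts block into the CoreDNS Corefile ahead of its forward directive (and adds a log directive next to errors), so host.minikube.internal resolves to the docker network gateway from inside the cluster. Reconstructed from the sed expressions, the injected fragment is:

	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
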
	I1127 11:27:35.049222  119811 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1127 11:27:35.050666  119811 addons.go:502] enable addons completed in 778.159209ms: enabled=[storage-provisioner default-storageclass]
	I1127 11:27:36.394374  119811 node_ready.go:58] node "ingress-addon-legacy-123827" has status "Ready":"False"
	I1127 11:27:38.394948  119811 node_ready.go:58] node "ingress-addon-legacy-123827" has status "Ready":"False"
	I1127 11:27:40.394978  119811 node_ready.go:58] node "ingress-addon-legacy-123827" has status "Ready":"False"
	I1127 11:27:42.894310  119811 node_ready.go:58] node "ingress-addon-legacy-123827" has status "Ready":"False"
	I1127 11:27:45.394083  119811 node_ready.go:58] node "ingress-addon-legacy-123827" has status "Ready":"False"
	I1127 11:27:47.394444  119811 node_ready.go:58] node "ingress-addon-legacy-123827" has status "Ready":"False"
	I1127 11:27:49.395162  119811 node_ready.go:58] node "ingress-addon-legacy-123827" has status "Ready":"False"
	I1127 11:27:49.894064  119811 node_ready.go:49] node "ingress-addon-legacy-123827" has status "Ready":"True"
	I1127 11:27:49.894090  119811 node_ready.go:38] duration metric: took 15.507801081s waiting for node "ingress-addon-legacy-123827" to be "Ready" ...
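
	The ~15.5s spent waiting here is dominated by the kindnet rollout: the kubelet keeps a node NotReady until a CNI config is present, so the Ready condition flips only after the DaemonSet pod has started. An explicit equivalent of this wait:

	    $ kubectl wait --for=condition=Ready node/ingress-addon-legacy-123827 --timeout=6m
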
	I1127 11:27:49.894100  119811 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 11:27:49.900378  119811 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-bcx9z" in "kube-system" namespace to be "Ready" ...
	I1127 11:27:51.908568  119811 pod_ready.go:102] pod "coredns-66bff467f8-bcx9z" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-27 11:27:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1127 11:27:54.408184  119811 pod_ready.go:102] pod "coredns-66bff467f8-bcx9z" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-27 11:27:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1127 11:27:56.909061  119811 pod_ready.go:102] pod "coredns-66bff467f8-bcx9z" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-27 11:27:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1127 11:27:58.911017  119811 pod_ready.go:102] pod "coredns-66bff467f8-bcx9z" in "kube-system" namespace has status "Ready":"False"
	I1127 11:28:00.911060  119811 pod_ready.go:102] pod "coredns-66bff467f8-bcx9z" in "kube-system" namespace has status "Ready":"False"
	I1127 11:28:03.410722  119811 pod_ready.go:102] pod "coredns-66bff467f8-bcx9z" in "kube-system" namespace has status "Ready":"False"
	I1127 11:28:05.910452  119811 pod_ready.go:102] pod "coredns-66bff467f8-bcx9z" in "kube-system" namespace has status "Ready":"False"
	I1127 11:28:07.911110  119811 pod_ready.go:102] pod "coredns-66bff467f8-bcx9z" in "kube-system" namespace has status "Ready":"False"
	I1127 11:28:08.410464  119811 pod_ready.go:92] pod "coredns-66bff467f8-bcx9z" in "kube-system" namespace has status "Ready":"True"
	I1127 11:28:08.410487  119811 pod_ready.go:81] duration metric: took 18.510073321s waiting for pod "coredns-66bff467f8-bcx9z" in "kube-system" namespace to be "Ready" ...
	I1127 11:28:08.410498  119811 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-123827" in "kube-system" namespace to be "Ready" ...
	I1127 11:28:08.414544  119811 pod_ready.go:92] pod "etcd-ingress-addon-legacy-123827" in "kube-system" namespace has status "Ready":"True"
	I1127 11:28:08.414569  119811 pod_ready.go:81] duration metric: took 4.06022ms waiting for pod "etcd-ingress-addon-legacy-123827" in "kube-system" namespace to be "Ready" ...
	I1127 11:28:08.414580  119811 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-123827" in "kube-system" namespace to be "Ready" ...
	I1127 11:28:08.418453  119811 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-123827" in "kube-system" namespace has status "Ready":"True"
	I1127 11:28:08.418485  119811 pod_ready.go:81] duration metric: took 3.88167ms waiting for pod "kube-apiserver-ingress-addon-legacy-123827" in "kube-system" namespace to be "Ready" ...
	I1127 11:28:08.418495  119811 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-123827" in "kube-system" namespace to be "Ready" ...
	I1127 11:28:08.422308  119811 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-123827" in "kube-system" namespace has status "Ready":"True"
	I1127 11:28:08.422329  119811 pod_ready.go:81] duration metric: took 3.826682ms waiting for pod "kube-controller-manager-ingress-addon-legacy-123827" in "kube-system" namespace to be "Ready" ...
	I1127 11:28:08.422338  119811 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gj4xk" in "kube-system" namespace to be "Ready" ...
	I1127 11:28:08.426107  119811 pod_ready.go:92] pod "kube-proxy-gj4xk" in "kube-system" namespace has status "Ready":"True"
	I1127 11:28:08.426126  119811 pod_ready.go:81] duration metric: took 3.781762ms waiting for pod "kube-proxy-gj4xk" in "kube-system" namespace to be "Ready" ...
	I1127 11:28:08.426135  119811 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-123827" in "kube-system" namespace to be "Ready" ...
	I1127 11:28:08.605510  119811 request.go:629] Waited for 179.274336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-123827
	I1127 11:28:08.806376  119811 request.go:629] Waited for 198.349486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-123827
	I1127 11:28:08.809019  119811 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-123827" in "kube-system" namespace has status "Ready":"True"
	I1127 11:28:08.809043  119811 pod_ready.go:81] duration metric: took 382.900604ms waiting for pod "kube-scheduler-ingress-addon-legacy-123827" in "kube-system" namespace to be "Ready" ...
	I1127 11:28:08.809055  119811 pod_ready.go:38] duration metric: took 18.914945867s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
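
	Each pod above is polled until its Ready condition is true, i.e. all of its containers pass their readiness checks. The per-pod waits can be reproduced with kubectl; for example, for CoreDNS (the label is the one listed in the wait set above):

	    $ kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
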
	I1127 11:28:08.809073  119811 api_server.go:52] waiting for apiserver process to appear ...
	I1127 11:28:08.809134  119811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:28:08.819889  119811 api_server.go:72] duration metric: took 34.526917527s to wait for apiserver process to appear ...
	I1127 11:28:08.819916  119811 api_server.go:88] waiting for apiserver healthz status ...
	I1127 11:28:08.819942  119811 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1127 11:28:08.824804  119811 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1127 11:28:08.825580  119811 api_server.go:141] control plane version: v1.18.20
	I1127 11:28:08.825603  119811 api_server.go:131] duration metric: took 5.68032ms to wait for apiserver health ...
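
	The healthz probe authenticates with the profile's client certificate pair from the kapi client config logged earlier; a manual equivalent of the same request (a healthy apiserver answers with the literal body "ok", as seen above):

	    $ curl --cacert /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt \
	           --cert /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt \
	           --key /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.key \
	           https://192.168.49.2:8443/healthz
	    ok
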
	I1127 11:28:08.825611  119811 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 11:28:09.005927  119811 request.go:629] Waited for 180.248989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1127 11:28:09.011217  119811 system_pods.go:59] 8 kube-system pods found
	I1127 11:28:09.011257  119811 system_pods.go:61] "coredns-66bff467f8-bcx9z" [526ef40d-bf12-464f-8b27-feaca94b3979] Running
	I1127 11:28:09.011265  119811 system_pods.go:61] "etcd-ingress-addon-legacy-123827" [a55c552b-b363-48e6-b5d2-d9d0d848d6c0] Running
	I1127 11:28:09.011271  119811 system_pods.go:61] "kindnet-mp9fx" [c4ee815a-c203-423c-be2f-9131979001f8] Running
	I1127 11:28:09.011277  119811 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-123827" [ea07dfc1-625a-4d9a-83eb-614956714bee] Running
	I1127 11:28:09.011283  119811 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-123827" [88f07d47-545b-4abe-8199-a5298d30af78] Running
	I1127 11:28:09.011289  119811 system_pods.go:61] "kube-proxy-gj4xk" [e02b407a-6ff8-4173-97a3-b4bd83f6d6eb] Running
	I1127 11:28:09.011295  119811 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-123827" [176aaa76-0e4e-45a2-8397-3161c337eb4d] Running
	I1127 11:28:09.011302  119811 system_pods.go:61] "storage-provisioner" [6c573614-d859-4cb1-8e20-e628e4282eac] Running
	I1127 11:28:09.011311  119811 system_pods.go:74] duration metric: took 185.693409ms to wait for pod list to return data ...
	I1127 11:28:09.011329  119811 default_sa.go:34] waiting for default service account to be created ...
	I1127 11:28:09.205804  119811 request.go:629] Waited for 194.362543ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1127 11:28:09.208303  119811 default_sa.go:45] found service account: "default"
	I1127 11:28:09.208336  119811 default_sa.go:55] duration metric: took 196.999374ms for default service account to be created ...
	I1127 11:28:09.208347  119811 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 11:28:09.405846  119811 request.go:629] Waited for 197.412041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1127 11:28:09.411281  119811 system_pods.go:86] 8 kube-system pods found
	I1127 11:28:09.411316  119811 system_pods.go:89] "coredns-66bff467f8-bcx9z" [526ef40d-bf12-464f-8b27-feaca94b3979] Running
	I1127 11:28:09.411325  119811 system_pods.go:89] "etcd-ingress-addon-legacy-123827" [a55c552b-b363-48e6-b5d2-d9d0d848d6c0] Running
	I1127 11:28:09.411331  119811 system_pods.go:89] "kindnet-mp9fx" [c4ee815a-c203-423c-be2f-9131979001f8] Running
	I1127 11:28:09.411337  119811 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-123827" [ea07dfc1-625a-4d9a-83eb-614956714bee] Running
	I1127 11:28:09.411343  119811 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-123827" [88f07d47-545b-4abe-8199-a5298d30af78] Running
	I1127 11:28:09.411348  119811 system_pods.go:89] "kube-proxy-gj4xk" [e02b407a-6ff8-4173-97a3-b4bd83f6d6eb] Running
	I1127 11:28:09.411354  119811 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-123827" [176aaa76-0e4e-45a2-8397-3161c337eb4d] Running
	I1127 11:28:09.411359  119811 system_pods.go:89] "storage-provisioner" [6c573614-d859-4cb1-8e20-e628e4282eac] Running
	I1127 11:28:09.411371  119811 system_pods.go:126] duration metric: took 203.015136ms to wait for k8s-apps to be running ...
	I1127 11:28:09.411385  119811 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 11:28:09.411442  119811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:28:09.424138  119811 system_svc.go:56] duration metric: took 12.738348ms WaitForService to wait for kubelet.
	I1127 11:28:09.424176  119811 kubeadm.go:581] duration metric: took 35.131210334s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 11:28:09.424202  119811 node_conditions.go:102] verifying NodePressure condition ...
	I1127 11:28:09.605548  119811 request.go:629] Waited for 181.258707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1127 11:28:09.608446  119811 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1127 11:28:09.608483  119811 node_conditions.go:123] node cpu capacity is 8
	I1127 11:28:09.608496  119811 node_conditions.go:105] duration metric: took 184.289956ms to run NodePressure ...
	I1127 11:28:09.608508  119811 start.go:228] waiting for startup goroutines ...
	I1127 11:28:09.608514  119811 start.go:233] waiting for cluster config update ...
	I1127 11:28:09.608528  119811 start.go:242] writing updated cluster config ...
	I1127 11:28:09.608779  119811 ssh_runner.go:195] Run: rm -f paused
	I1127 11:28:09.658479  119811 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1127 11:28:09.660680  119811 out.go:177] 
	W1127 11:28:09.662294  119811 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1127 11:28:09.664042  119811 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1127 11:28:09.665540  119811 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-123827" cluster and "default" namespace by default
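
	kubectl's support policy covers only one minor version of skew against the API server, so 1.28 against a 1.18 control plane (skew 10) is well outside it, hence the warning above. minikube's suggestion sidesteps this by downloading and proxying a kubectl that matches the cluster:

	    $ minikube -p ingress-addon-legacy-123827 kubectl -- get pods -A
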
	
	* 
	* ==> CRI-O <==
	* Nov 27 11:30:54 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:30:54.760655937Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-vqnkj/hello-world-app" id=c2a0cb77-ccf1-4fe5-b1a3-98ded58a628e name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Nov 27 11:30:54 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:30:54.760762689Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 27 11:30:54 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:30:54.869813215Z" level=info msg="Created container b2f4775515fd0654e61873139f8e823af3b0ad9149922210f3ef5824f8444fc9: default/hello-world-app-5f5d8b66bb-vqnkj/hello-world-app" id=c2a0cb77-ccf1-4fe5-b1a3-98ded58a628e name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Nov 27 11:30:54 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:30:54.870364983Z" level=info msg="Starting container: b2f4775515fd0654e61873139f8e823af3b0ad9149922210f3ef5824f8444fc9" id=eb8e51d5-09ac-4ead-b136-15c94883cd6e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Nov 27 11:30:54 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:30:54.879746208Z" level=info msg="Started container" PID=4847 containerID=b2f4775515fd0654e61873139f8e823af3b0ad9149922210f3ef5824f8444fc9 description=default/hello-world-app-5f5d8b66bb-vqnkj/hello-world-app id=eb8e51d5-09ac-4ead-b136-15c94883cd6e name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=923cc67ee8b30f27bb92f574ba53e9d6234548e454818215f7945849743660d6
	Nov 27 11:31:01 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:01.646741583Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=af98dd07-a80d-49a7-bf9e-961514ea3ecf name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 27 11:31:09 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:09.646328971Z" level=info msg="Stopping pod sandbox: e165d20a9eb06f6bd17d0b00ab16a19f203fd61a78fc2b94b86bb9e698049396" id=fe3f7362-1fb5-4758-892d-28c135bb978e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 11:31:09 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:09.647397148Z" level=info msg="Stopped pod sandbox: e165d20a9eb06f6bd17d0b00ab16a19f203fd61a78fc2b94b86bb9e698049396" id=fe3f7362-1fb5-4758-892d-28c135bb978e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 11:31:10 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:10.413888272Z" level=info msg="Stopping container: f6f093608c678de5e96356a4fb74b6086841a538378a6a3f6f065fdc31cf7d59 (timeout: 2s)" id=fcc59f75-8a4c-43f5-9089-3b0501201dd1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 11:31:10 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:10.416772040Z" level=info msg="Stopping container: f6f093608c678de5e96356a4fb74b6086841a538378a6a3f6f065fdc31cf7d59 (timeout: 2s)" id=d3da2395-12b1-4490-9c43-f194535a8453 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 11:31:12 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:12.424912113Z" level=warning msg="Stopping container f6f093608c678de5e96356a4fb74b6086841a538378a6a3f6f065fdc31cf7d59 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=fcc59f75-8a4c-43f5-9089-3b0501201dd1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 11:31:12 ingress-addon-legacy-123827 conmon[3399]: conmon f6f093608c678de5e963 <ninfo>: container 3411 exited with status 137
	Nov 27 11:31:12 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:12.587645649Z" level=info msg="Stopped container f6f093608c678de5e96356a4fb74b6086841a538378a6a3f6f065fdc31cf7d59: ingress-nginx/ingress-nginx-controller-7fcf777cb7-gct9t/controller" id=d3da2395-12b1-4490-9c43-f194535a8453 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 11:31:12 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:12.587701349Z" level=info msg="Stopped container f6f093608c678de5e96356a4fb74b6086841a538378a6a3f6f065fdc31cf7d59: ingress-nginx/ingress-nginx-controller-7fcf777cb7-gct9t/controller" id=fcc59f75-8a4c-43f5-9089-3b0501201dd1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 11:31:12 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:12.588382389Z" level=info msg="Stopping pod sandbox: 48df82f34cd0941f05e130232f2db2a46d8f72bd87e6d855a8a7584f96d068a2" id=cc84193d-72bf-464c-ac30-6299d2b99075 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 11:31:12 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:12.588386089Z" level=info msg="Stopping pod sandbox: 48df82f34cd0941f05e130232f2db2a46d8f72bd87e6d855a8a7584f96d068a2" id=722e227e-79ae-406c-a6e6-ba6fcc2cbda2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 11:31:12 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:12.591168632Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-SXWRT655SOA3CDDK - [0:0]\n:KUBE-HP-Q7AHNKBHJ4BDHGM3 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-Q7AHNKBHJ4BDHGM3\n-X KUBE-HP-SXWRT655SOA3CDDK\nCOMMIT\n"
	Nov 27 11:31:12 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:12.592646332Z" level=info msg="Closing host port tcp:80"
	Nov 27 11:31:12 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:12.592685422Z" level=info msg="Closing host port tcp:443"
	Nov 27 11:31:12 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:12.593707052Z" level=info msg="Host port tcp:80 does not have an open socket"
	Nov 27 11:31:12 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:12.593734124Z" level=info msg="Host port tcp:443 does not have an open socket"
	Nov 27 11:31:12 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:12.593875604Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-gct9t Namespace:ingress-nginx ID:48df82f34cd0941f05e130232f2db2a46d8f72bd87e6d855a8a7584f96d068a2 UID:ec3fe55d-8433-4f70-b2ff-f62f5463c8aa NetNS:/var/run/netns/ee37cec6-5365-417b-bb28-0242dcb488b2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 27 11:31:12 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:12.594002229Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-gct9t from CNI network \"kindnet\" (type=ptp)"
	Nov 27 11:31:12 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:12.621189935Z" level=info msg="Stopped pod sandbox: 48df82f34cd0941f05e130232f2db2a46d8f72bd87e6d855a8a7584f96d068a2" id=cc84193d-72bf-464c-ac30-6299d2b99075 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 11:31:12 ingress-addon-legacy-123827 crio[956]: time="2023-11-27 11:31:12.621325390Z" level=info msg="Stopped pod sandbox (already stopped): 48df82f34cd0941f05e130232f2db2a46d8f72bd87e6d855a8a7584f96d068a2" id=722e227e-79ae-406c-a6e6-ba6fcc2cbda2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
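
	The teardown sequence above is the normal CRI-O stop path: the controller container does not exit within its 2-second grace period ("stop signal timed out"), conmon then reports exit status 137 (128 + SIGKILL), and CRI-O stops the sandbox, deletes the KUBE-HP hostport NAT chains for ports 80/443, and detaches the pod from the kindnet CNI network. A manual equivalent with crictl, using the container and sandbox IDs from the log:

	    $ sudo crictl stop -t 2 f6f093608c678de5e96356a4fb74b6086841a538378a6a3f6f065fdc31cf7d59
	    $ sudo crictl stopp 48df82f34cd0941f05e130232f2db2a46d8f72bd87e6d855a8a7584f96d068a2
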
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b2f4775515fd0       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            23 seconds ago      Running             hello-world-app           0                   923cc67ee8b30       hello-world-app-5f5d8b66bb-vqnkj
	aedd3b0bee1e7       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                    2 minutes ago       Running             nginx                     0                   0d775fc5c7d45       nginx
	f6f093608c678       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   48df82f34cd09       ingress-nginx-controller-7fcf777cb7-gct9t
	8b14cb860a68c       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   2a371532f6e00       ingress-nginx-admission-patch-qqfsn
	f0ee7ddc04639       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   26a3f627d5092       ingress-nginx-admission-create-44jvc
	712f2382222bc       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   2b2bc7382bae9       coredns-66bff467f8-bcx9z
	7d628bc8fc13e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   e68fcbb75fe60       storage-provisioner
	ffed92f74b1aa       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   7ff5c61eec9ce       kindnet-mp9fx
	3ec318e9bc6be       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   928760f641e8b       kube-proxy-gj4xk
	ba5d4a7b9b0ed       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   677944b94d673       etcd-ingress-addon-legacy-123827
	b289e78546bd8       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   f8f53d6bc6b24       kube-scheduler-ingress-addon-legacy-123827
	1cbc636c5d1c7       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   915d35f479a67       kube-controller-manager-ingress-addon-legacy-123827
	04d57657be3a0       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   e5ad91e5d64c8       kube-apiserver-ingress-addon-legacy-123827
	
	* 
	* ==> coredns [712f2382222bcd0c6442e4b46942afa6dcc8041a87075faad21cd0459cd6d80c] <==
	* [INFO] 10.244.0.5:54537 - 40572 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004350854s
	[INFO] 10.244.0.5:48985 - 32383 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003899079s
	[INFO] 10.244.0.5:44326 - 35808 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003806503s
	[INFO] 10.244.0.5:54151 - 7931 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003678151s
	[INFO] 10.244.0.5:43568 - 64077 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003806417s
	[INFO] 10.244.0.5:53392 - 43036 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003766465s
	[INFO] 10.244.0.5:34464 - 51369 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00395456s
	[INFO] 10.244.0.5:37760 - 32606 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003874757s
	[INFO] 10.244.0.5:54537 - 61786 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003934629s
	[INFO] 10.244.0.5:44326 - 52656 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004338843s
	[INFO] 10.244.0.5:37760 - 23359 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004114275s
	[INFO] 10.244.0.5:53392 - 65226 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004431778s
	[INFO] 10.244.0.5:34464 - 52946 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004536913s
	[INFO] 10.244.0.5:54151 - 32863 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004586776s
	[INFO] 10.244.0.5:48985 - 52970 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004553573s
	[INFO] 10.244.0.5:43568 - 51163 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004633419s
	[INFO] 10.244.0.5:54537 - 54862 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00455175s
	[INFO] 10.244.0.5:37760 - 64605 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000188124s
	[INFO] 10.244.0.5:48985 - 40408 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000062127s
	[INFO] 10.244.0.5:54151 - 29758 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000052398s
	[INFO] 10.244.0.5:34464 - 22010 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000222194s
	[INFO] 10.244.0.5:44326 - 26735 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000307783s
	[INFO] 10.244.0.5:43568 - 3023 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000149592s
	[INFO] 10.244.0.5:53392 - 47506 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000363453s
	[INFO] 10.244.0.5:54537 - 29929 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000064037s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-123827
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-123827
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=81390b5609e7feb2151fde4633273d04eb05a21f
	                    minikube.k8s.io/name=ingress-addon-legacy-123827
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T11_27_19_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 11:27:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-123827
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 11:31:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 11:28:59 +0000   Mon, 27 Nov 2023 11:27:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 11:28:59 +0000   Mon, 27 Nov 2023 11:27:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 11:28:59 +0000   Mon, 27 Nov 2023 11:27:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 11:28:59 +0000   Mon, 27 Nov 2023 11:27:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-123827
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a9664a1618f4a0a9572644bb8cb6c9d
	  System UUID:                d05edfce-d7c8-497f-a751-fae09c1ae312
	  Boot ID:                    70e275d9-e289-4a40-9f12-718983944527
	  Kernel Version:             5.15.0-1046-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-vqnkj                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-66bff467f8-bcx9z                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m44s
	  kube-system                 etcd-ingress-addon-legacy-123827                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kindnet-mp9fx                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m44s
	  kube-system                 kube-apiserver-ingress-addon-legacy-123827             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-123827    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-gj4xk                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-scheduler-ingress-addon-legacy-123827             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 4m7s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m7s (x4 over 4m7s)  kubelet     Node ingress-addon-legacy-123827 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x4 over 4m7s)  kubelet     Node ingress-addon-legacy-123827 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x3 over 4m7s)  kubelet     Node ingress-addon-legacy-123827 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m59s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m59s                kubelet     Node ingress-addon-legacy-123827 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m59s                kubelet     Node ingress-addon-legacy-123827 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m59s                kubelet     Node ingress-addon-legacy-123827 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m43s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m29s                kubelet     Node ingress-addon-legacy-123827 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004916] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006603] FS-Cache: N-cookie d=00000000bad6431e{9p.inode} n=00000000519b9590
	[  +0.008720] FS-Cache: N-key=[8] '4aa20f0200000000'
	[  +0.301934] FS-Cache: Duplicate cookie detected
	[  +0.004681] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006745] FS-Cache: O-cookie d=00000000bad6431e{9p.inode} n=0000000001f430cd
	[  +0.007366] FS-Cache: O-key=[8] '52a20f0200000000'
	[  +0.004934] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006587] FS-Cache: N-cookie d=00000000bad6431e{9p.inode} n=00000000245eaa82
	[  +0.007353] FS-Cache: N-key=[8] '52a20f0200000000'
	[ +22.917859] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov27 11:28] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: aa de 03 b2 cf 96 12 12 f1 37 75 e0 08 00
	[  +1.035585] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa de 03 b2 cf 96 12 12 f1 37 75 e0 08 00
	[  +2.011761] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa de 03 b2 cf 96 12 12 f1 37 75 e0 08 00
	[  +4.255612] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa de 03 b2 cf 96 12 12 f1 37 75 e0 08 00
	[  +8.191137] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: aa de 03 b2 cf 96 12 12 f1 37 75 e0 08 00
	[Nov27 11:29] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: aa de 03 b2 cf 96 12 12 f1 37 75 e0 08 00
	[ +32.252667] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: aa de 03 b2 cf 96 12 12 f1 37 75 e0 08 00
	
	* 
	* ==> etcd [ba5d4a7b9b0edc7f75ba4380cfec3a3b862f148319c4f8de3ec90d8c0ba72cc1] <==
	* raft2023/11/27 11:27:12 INFO: aec36adc501070cc became follower at term 0
	raft2023/11/27 11:27:12 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/11/27 11:27:12 INFO: aec36adc501070cc became follower at term 1
	raft2023/11/27 11:27:12 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-27 11:27:12.456027 W | auth: simple token is not cryptographically signed
	2023-11-27 11:27:12.459334 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-27 11:27:12.459768 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/11/27 11:27:12 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-27 11:27:12.460155 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-11-27 11:27:12.462534 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-27 11:27:12.462709 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-27 11:27:12.462782 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/11/27 11:27:12 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/27 11:27:12 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/27 11:27:12 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/27 11:27:12 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/27 11:27:12 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-27 11:27:12.852534 I | etcdserver: published {Name:ingress-addon-legacy-123827 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-27 11:27:12.852558 I | embed: ready to serve client requests
	2023-11-27 11:27:12.852672 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-27 11:27:12.852804 I | embed: ready to serve client requests
	2023-11-27 11:27:12.853799 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-27 11:27:12.853967 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-27 11:27:12.855336 I | embed: serving client requests on 192.168.49.2:2379
	2023-11-27 11:27:12.855383 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  11:31:18 up  2:13,  0 users,  load average: 0.13, 0.82, 1.54
	Linux ingress-addon-legacy-123827 5.15.0-1046-gcp #54~20.04.1-Ubuntu SMP Wed Oct 25 08:22:15 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [ffed92f74b1aa61d4dd384767fced35b6ca8551cb15a535532778cc1351fd335] <==
	* I1127 11:29:09.931575       1 main.go:227] handling current node
	I1127 11:29:19.936389       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:29:19.936418       1 main.go:227] handling current node
	I1127 11:29:29.939799       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:29:29.939825       1 main.go:227] handling current node
	I1127 11:29:39.942940       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:29:39.942964       1 main.go:227] handling current node
	I1127 11:29:49.954948       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:29:49.954974       1 main.go:227] handling current node
	I1127 11:29:59.966865       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:29:59.966891       1 main.go:227] handling current node
	I1127 11:30:09.978776       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:30:09.978803       1 main.go:227] handling current node
	I1127 11:30:19.986880       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:30:19.986918       1 main.go:227] handling current node
	I1127 11:30:29.999198       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:30:29.999224       1 main.go:227] handling current node
	I1127 11:30:40.003754       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:30:40.003789       1 main.go:227] handling current node
	I1127 11:30:50.007006       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:30:50.007030       1 main.go:227] handling current node
	I1127 11:31:00.011238       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:31:00.011266       1 main.go:227] handling current node
	I1127 11:31:10.022688       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 11:31:10.022711       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [04d57657be3a0960696974bf751660fe8d734461bb80db1f2dbd436e8054a849] <==
	* I1127 11:27:16.138206       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E1127 11:27:16.142679       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1127 11:27:16.240338       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1127 11:27:16.240462       1 cache.go:39] Caches are synced for autoregister controller
	I1127 11:27:16.240864       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1127 11:27:16.241998       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1127 11:27:16.245654       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1127 11:27:17.136420       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1127 11:27:17.136453       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1127 11:27:17.141456       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1127 11:27:17.144232       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1127 11:27:17.144257       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1127 11:27:17.427952       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1127 11:27:17.464827       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1127 11:27:17.560044       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1127 11:27:17.560922       1 controller.go:609] quota admission added evaluator for: endpoints
	I1127 11:27:17.563885       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1127 11:27:18.506655       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1127 11:27:19.225349       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1127 11:27:19.416984       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1127 11:27:19.585620       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1127 11:27:34.041921       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1127 11:27:34.546224       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1127 11:28:10.357282       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1127 11:28:31.434199       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [1cbc636c5d1c7c664ea2e643a200aad4a1154b83238f80d4f7a0d9b2ed5f3ba3] <==
	* I1127 11:27:34.240850       1 shared_informer.go:230] Caches are synced for attach detach 
	I1127 11:27:34.294126       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"d2151419-e87f-47e8-8152-4068145ff0ce", APIVersion:"apps/v1", ResourceVersion:"355", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1127 11:27:34.304984       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"ff6c42d2-20df-4778-8d30-28020c137835", APIVersion:"apps/v1", ResourceVersion:"356", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-vcm7d
	I1127 11:27:34.388095       1 shared_informer.go:230] Caches are synced for job 
	I1127 11:27:34.462096       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1127 11:27:34.469406       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1127 11:27:34.471499       1 shared_informer.go:230] Caches are synced for endpoint 
	I1127 11:27:34.540314       1 shared_informer.go:230] Caches are synced for stateful set 
	I1127 11:27:34.540335       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1127 11:27:34.540554       1 shared_informer.go:230] Caches are synced for resource quota 
	I1127 11:27:34.557946       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1127 11:27:34.557967       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1127 11:27:34.562483       1 shared_informer.go:230] Caches are synced for resource quota 
	I1127 11:27:34.564835       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"25ccdaae-534e-4fb2-b3ff-eb7661bd5f53", APIVersion:"apps/v1", ResourceVersion:"250", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-mp9fx
	I1127 11:27:34.564872       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"4447f069-6ccf-4549-8070-9b556166d67b", APIVersion:"apps/v1", ResourceVersion:"230", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-gj4xk
	I1127 11:27:54.084693       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1127 11:28:10.351244       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"41a13844-f4fb-44da-ad59-3ff909e7971d", APIVersion:"apps/v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1127 11:28:10.358393       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"eb0655ed-c69e-4819-b3c1-6b40c5ba2e32", APIVersion:"apps/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-gct9t
	I1127 11:28:10.367976       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"e7d92129-205a-4272-896c-52c3eff8ccce", APIVersion:"batch/v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-44jvc
	I1127 11:28:10.441051       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"2d056147-a78f-4e0a-9196-42af7ba9662b", APIVersion:"batch/v1", ResourceVersion:"501", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-qqfsn
	I1127 11:28:14.810684       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"e7d92129-205a-4272-896c-52c3eff8ccce", APIVersion:"batch/v1", ResourceVersion:"507", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1127 11:28:15.867645       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"2d056147-a78f-4e0a-9196-42af7ba9662b", APIVersion:"batch/v1", ResourceVersion:"514", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1127 11:30:52.487716       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"6a1dc29c-e584-419c-bb28-f84a95c57010", APIVersion:"apps/v1", ResourceVersion:"725", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1127 11:30:52.495634       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"6af4b28f-7e44-42ed-a1e9-1634dd5103fc", APIVersion:"apps/v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-vqnkj
	E1127 11:31:15.239879       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-gsd48" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [3ec318e9bc6becfc82363be4ece9868f1e6cff7092b15999ace4d185193cef54] <==
	* W1127 11:27:35.166599       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1127 11:27:35.174689       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1127 11:27:35.174716       1 server_others.go:186] Using iptables Proxier.
	I1127 11:27:35.174974       1 server.go:583] Version: v1.18.20
	I1127 11:27:35.175411       1 config.go:315] Starting service config controller
	I1127 11:27:35.175429       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1127 11:27:35.175486       1 config.go:133] Starting endpoints config controller
	I1127 11:27:35.175520       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1127 11:27:35.275604       1 shared_informer.go:230] Caches are synced for service config 
	I1127 11:27:35.275697       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [b289e78546bd857130b810893e102d1e58983b08aa4ff5b83a4d6ed099a5af6b] <==
	* W1127 11:27:16.160840       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1127 11:27:16.160869       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1127 11:27:16.250225       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1127 11:27:16.250260       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1127 11:27:16.252377       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1127 11:27:16.252517       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1127 11:27:16.252788       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1127 11:27:16.252806       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1127 11:27:16.253886       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1127 11:27:16.254882       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1127 11:27:16.255680       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 11:27:16.255759       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1127 11:27:16.255851       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1127 11:27:16.256051       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 11:27:16.256075       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1127 11:27:16.255722       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 11:27:16.256115       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 11:27:16.256150       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1127 11:27:16.256184       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1127 11:27:16.256187       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1127 11:27:17.117833       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1127 11:27:17.139858       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1127 11:27:17.143824       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 11:27:17.264578       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1127 11:27:17.653348       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Nov 27 11:30:37 ingress-addon-legacy-123827 kubelet[1863]: E1127 11:30:37.647197    1863 pod_workers.go:191] Error syncing pod 73b9d1e3-72fe-440b-9b18-0c462fd13093 ("kube-ingress-dns-minikube_kube-system(73b9d1e3-72fe-440b-9b18-0c462fd13093)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 27 11:30:49 ingress-addon-legacy-123827 kubelet[1863]: E1127 11:30:49.647288    1863 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 27 11:30:49 ingress-addon-legacy-123827 kubelet[1863]: E1127 11:30:49.647331    1863 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 27 11:30:49 ingress-addon-legacy-123827 kubelet[1863]: E1127 11:30:49.647387    1863 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 27 11:30:49 ingress-addon-legacy-123827 kubelet[1863]: E1127 11:30:49.647423    1863 pod_workers.go:191] Error syncing pod 73b9d1e3-72fe-440b-9b18-0c462fd13093 ("kube-ingress-dns-minikube_kube-system(73b9d1e3-72fe-440b-9b18-0c462fd13093)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 27 11:30:52 ingress-addon-legacy-123827 kubelet[1863]: I1127 11:30:52.500560    1863 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Nov 27 11:30:52 ingress-addon-legacy-123827 kubelet[1863]: I1127 11:30:52.658510    1863 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-7qrnx" (UniqueName: "kubernetes.io/secret/d108cb7f-07d4-4ab2-b155-73635d1c99f0-default-token-7qrnx") pod "hello-world-app-5f5d8b66bb-vqnkj" (UID: "d108cb7f-07d4-4ab2-b155-73635d1c99f0")
	Nov 27 11:30:52 ingress-addon-legacy-123827 kubelet[1863]: W1127 11:30:52.872639    1863 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/33fe80a0b879241f96eba00cdd066b3b58c755c6ca5940231229c0545359e47b/crio-923cc67ee8b30f27bb92f574ba53e9d6234548e454818215f7945849743660d6 WatchSource:0}: Error finding container 923cc67ee8b30f27bb92f574ba53e9d6234548e454818215f7945849743660d6: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000e11340 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x750800) %!!(MISSING)s(func() error=0x750790)}
	Nov 27 11:31:01 ingress-addon-legacy-123827 kubelet[1863]: E1127 11:31:01.647287    1863 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 27 11:31:01 ingress-addon-legacy-123827 kubelet[1863]: E1127 11:31:01.647336    1863 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 27 11:31:01 ingress-addon-legacy-123827 kubelet[1863]: E1127 11:31:01.647396    1863 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 27 11:31:01 ingress-addon-legacy-123827 kubelet[1863]: E1127 11:31:01.647438    1863 pod_workers.go:191] Error syncing pod 73b9d1e3-72fe-440b-9b18-0c462fd13093 ("kube-ingress-dns-minikube_kube-system(73b9d1e3-72fe-440b-9b18-0c462fd13093)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 27 11:31:08 ingress-addon-legacy-123827 kubelet[1863]: I1127 11:31:08.297207    1863 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-wsskr" (UniqueName: "kubernetes.io/secret/73b9d1e3-72fe-440b-9b18-0c462fd13093-minikube-ingress-dns-token-wsskr") pod "73b9d1e3-72fe-440b-9b18-0c462fd13093" (UID: "73b9d1e3-72fe-440b-9b18-0c462fd13093")
	Nov 27 11:31:08 ingress-addon-legacy-123827 kubelet[1863]: I1127 11:31:08.299249    1863 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73b9d1e3-72fe-440b-9b18-0c462fd13093-minikube-ingress-dns-token-wsskr" (OuterVolumeSpecName: "minikube-ingress-dns-token-wsskr") pod "73b9d1e3-72fe-440b-9b18-0c462fd13093" (UID: "73b9d1e3-72fe-440b-9b18-0c462fd13093"). InnerVolumeSpecName "minikube-ingress-dns-token-wsskr". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 11:31:08 ingress-addon-legacy-123827 kubelet[1863]: I1127 11:31:08.397567    1863 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-wsskr" (UniqueName: "kubernetes.io/secret/73b9d1e3-72fe-440b-9b18-0c462fd13093-minikube-ingress-dns-token-wsskr") on node "ingress-addon-legacy-123827" DevicePath ""
	Nov 27 11:31:10 ingress-addon-legacy-123827 kubelet[1863]: W1127 11:31:10.168764    1863 pod_container_deletor.go:77] Container "e165d20a9eb06f6bd17d0b00ab16a19f203fd61a78fc2b94b86bb9e698049396" not found in pod's containers
	Nov 27 11:31:10 ingress-addon-legacy-123827 kubelet[1863]: E1127 11:31:10.417720    1863 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-gct9t.179b777eb8db158a", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-gct9t", UID:"ec3fe55d-8433-4f70-b2ff-f62f5463c8aa", APIVersion:"v1", ResourceVersion:"497", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-123827"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1513d5f98a4a98a, ext:231223834863, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1513d5f98a4a98a, ext:231223834863, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-gct9t.179b777eb8db158a" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 27 11:31:10 ingress-addon-legacy-123827 kubelet[1863]: E1127 11:31:10.420739    1863 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-gct9t.179b777eb8db158a", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-gct9t", UID:"ec3fe55d-8433-4f70-b2ff-f62f5463c8aa", APIVersion:"v1", ResourceVersion:"497", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-123827"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1513d5f98a4a98a, ext:231223834863, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1513d5f98cdb57a, ext:231226516869, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-gct9t.179b777eb8db158a" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 27 11:31:13 ingress-addon-legacy-123827 kubelet[1863]: W1127 11:31:13.174775    1863 pod_container_deletor.go:77] Container "48df82f34cd0941f05e130232f2db2a46d8f72bd87e6d855a8a7584f96d068a2" not found in pod's containers
	Nov 27 11:31:14 ingress-addon-legacy-123827 kubelet[1863]: I1127 11:31:14.551480    1863 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-7djq6" (UniqueName: "kubernetes.io/secret/ec3fe55d-8433-4f70-b2ff-f62f5463c8aa-ingress-nginx-token-7djq6") pod "ec3fe55d-8433-4f70-b2ff-f62f5463c8aa" (UID: "ec3fe55d-8433-4f70-b2ff-f62f5463c8aa")
	Nov 27 11:31:14 ingress-addon-legacy-123827 kubelet[1863]: I1127 11:31:14.551547    1863 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/ec3fe55d-8433-4f70-b2ff-f62f5463c8aa-webhook-cert") pod "ec3fe55d-8433-4f70-b2ff-f62f5463c8aa" (UID: "ec3fe55d-8433-4f70-b2ff-f62f5463c8aa")
	Nov 27 11:31:14 ingress-addon-legacy-123827 kubelet[1863]: I1127 11:31:14.553547    1863 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3fe55d-8433-4f70-b2ff-f62f5463c8aa-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ec3fe55d-8433-4f70-b2ff-f62f5463c8aa" (UID: "ec3fe55d-8433-4f70-b2ff-f62f5463c8aa"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 11:31:14 ingress-addon-legacy-123827 kubelet[1863]: I1127 11:31:14.553722    1863 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3fe55d-8433-4f70-b2ff-f62f5463c8aa-ingress-nginx-token-7djq6" (OuterVolumeSpecName: "ingress-nginx-token-7djq6") pod "ec3fe55d-8433-4f70-b2ff-f62f5463c8aa" (UID: "ec3fe55d-8433-4f70-b2ff-f62f5463c8aa"). InnerVolumeSpecName "ingress-nginx-token-7djq6". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 11:31:14 ingress-addon-legacy-123827 kubelet[1863]: I1127 11:31:14.651816    1863 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/ec3fe55d-8433-4f70-b2ff-f62f5463c8aa-webhook-cert") on node "ingress-addon-legacy-123827" DevicePath ""
	Nov 27 11:31:14 ingress-addon-legacy-123827 kubelet[1863]: I1127 11:31:14.651854    1863 reconciler.go:319] Volume detached for volume "ingress-nginx-token-7djq6" (UniqueName: "kubernetes.io/secret/ec3fe55d-8433-4f70-b2ff-f62f5463c8aa-ingress-nginx-token-7djq6") on node "ingress-addon-legacy-123827" DevicePath ""
	
	* 
	* ==> storage-provisioner [7d628bc8fc13e94656c37fee24b8518f0502ae0c6827e9fe482a7da82edfc62e] <==
	* I1127 11:27:54.809457       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1127 11:27:54.817086       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1127 11:27:54.817148       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1127 11:27:54.822230       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1127 11:27:54.822373       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-123827_9961c3ec-b7db-455b-b5c3-e9f0fd006fcc!
	I1127 11:27:54.822361       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7287d506-b179-4a5c-8632-9f70e177efdc", APIVersion:"v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-123827_9961c3ec-b7db-455b-b5c3-e9f0fd006fcc became leader
	I1127 11:27:54.923189       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-123827_9961c3ec-b7db-455b-b5c3-e9f0fd006fcc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-123827 -n ingress-addon-legacy-123827
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-123827 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (177.37s)
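
The kubelet log in the post-mortem above pins down the proximate failure: CRI-O cannot start the ingress-dns container because the short image name cryptexlabs/minikube-ingress-dns:0.3.0 does not resolve to an alias and no unqualified-search registries are defined on the node. A minimal sketch of the two usual remedies, assuming a CRI-O node (hypothetical; neither was applied in this run):

	# /etc/containers/registries.conf (v2 TOML format) -- define a search registry
	# so that short names like "cryptexlabs/..." are looked up on docker.io
	unqualified-search-registries = ["docker.io"]

	# or qualify the image in the manifest so no registry search is needed:
	#   image: docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab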

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-780990 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-780990 -- exec busybox-5bc68d56bd-fxkgq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-780990 -- exec busybox-5bc68d56bd-fxkgq -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-780990 -- exec busybox-5bc68d56bd-fxkgq -- sh -c "ping -c 1 192.168.58.1": exit status 1 (184.809024ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-fxkgq): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-780990 -- exec busybox-5bc68d56bd-wslrr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-780990 -- exec busybox-5bc68d56bd-wslrr -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-780990 -- exec busybox-5bc68d56bd-wslrr -- sh -c "ping -c 1 192.168.58.1": exit status 1 (182.872197ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-wslrr): exit status 1
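
The "ping: permission denied (are you root?)" in stderr above means the busybox pod can open neither a raw ICMP socket (it lacks CAP_NET_RAW) nor an unprivileged ICMP datagram socket (its GID falls outside the kernel's net.ipv4.ping_group_range). A minimal sketch of two workarounds, assuming a cluster that allows this safe sysctl and a plain busybox pod spec (hypothetical; not the deployment this test uses):

	# open unprivileged ICMP echo to all groups via a pod-level safe sysctl
	spec:
	  securityContext:
	    sysctls:
	    - name: net.ipv4.ping_group_range
	      value: "0 2147483647"

	# or grant the raw-socket capability to the container instead:
	#   containers:
	#   - name: busybox
	#     securityContext:
	#       capabilities:
	#         add: ["NET_RAW"]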
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-780990
helpers_test.go:235: (dbg) docker inspect multinode-780990:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b91bdbce677fe6f82a9f829d9de3e87c315a78c68ff007e9e6f8a0c391b8497f",
	        "Created": "2023-11-27T11:36:08.624711247Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 166129,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-27T11:36:08.902615479Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7b13b8068c138827ed6edd3fefc1858e39f15798035b600ada929f3fdbe10859",
	        "ResolvConfPath": "/var/lib/docker/containers/b91bdbce677fe6f82a9f829d9de3e87c315a78c68ff007e9e6f8a0c391b8497f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b91bdbce677fe6f82a9f829d9de3e87c315a78c68ff007e9e6f8a0c391b8497f/hostname",
	        "HostsPath": "/var/lib/docker/containers/b91bdbce677fe6f82a9f829d9de3e87c315a78c68ff007e9e6f8a0c391b8497f/hosts",
	        "LogPath": "/var/lib/docker/containers/b91bdbce677fe6f82a9f829d9de3e87c315a78c68ff007e9e6f8a0c391b8497f/b91bdbce677fe6f82a9f829d9de3e87c315a78c68ff007e9e6f8a0c391b8497f-json.log",
	        "Name": "/multinode-780990",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-780990:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-780990",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/39be73467bb1f8a002e84803781512280c7ce390f7ecc135476d278972b33a18-init/diff:/var/lib/docker/overlay2/6890504cd609c764c809309abb3d72eb8ac39b0411e6657ccda2a2f23689cb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39be73467bb1f8a002e84803781512280c7ce390f7ecc135476d278972b33a18/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39be73467bb1f8a002e84803781512280c7ce390f7ecc135476d278972b33a18/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39be73467bb1f8a002e84803781512280c7ce390f7ecc135476d278972b33a18/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-780990",
	                "Source": "/var/lib/docker/volumes/multinode-780990/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-780990",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-780990",
	                "name.minikube.sigs.k8s.io": "multinode-780990",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d256c84e69fb106564a00c719bf4b08cf7f73a93df0bc6d4a9b4988d4e78636",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1d256c84e69f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-780990": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b91bdbce677f",
	                        "multinode-780990"
	                    ],
	                    "NetworkID": "1af6824684458606af834d5b483d5aa1d98c3cb49f26492c1bd4025377ff7bdf",
	                    "EndpointID": "76def20f71709113b6e9c95fc99e77b345ca847f7d4df4ef1656d79c484957e0",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
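The inspect dump above pins down the networking facts that matter for this failure: the node container sits on the user-defined bridge network multinode-780990 at 192.168.58.2, and the host side of that bridge is the gateway 192.168.58.1, which is exactly the address the failing ping targets. A minimal Go sketch (not part of the test suite; it assumes a local Docker CLI and the container name from this report) that extracts the same two fields directly:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// netInfo mirrors the two fields of interest from the
	// NetworkSettings.Networks entry shown in the inspect dump above.
	type netInfo struct {
		Gateway   string `json:"Gateway"`
		IPAddress string `json:"IPAddress"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect",
			"-f", `{{json (index .NetworkSettings.Networks "multinode-780990")}}`,
			"multinode-780990").Output()
		if err != nil {
			log.Fatal(err)
		}
		var n netInfo
		if err := json.Unmarshal(out, &n); err != nil {
			log.Fatal(err)
		}
		// Expected for this report: container 192.168.58.2, gateway 192.168.58.1
		fmt.Printf("container %s, gateway %s\n", n.IPAddress, n.Gateway)
	}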
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-780990 -n multinode-780990
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-780990 logs -n 25: (1.37574478s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-298161                           | mount-start-2-298161 | jenkins | v1.32.0 | 27 Nov 23 11:35 UTC | 27 Nov 23 11:35 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-298161 ssh -- ls                    | mount-start-2-298161 | jenkins | v1.32.0 | 27 Nov 23 11:35 UTC | 27 Nov 23 11:35 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-283985                           | mount-start-1-283985 | jenkins | v1.32.0 | 27 Nov 23 11:35 UTC | 27 Nov 23 11:35 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-298161 ssh -- ls                    | mount-start-2-298161 | jenkins | v1.32.0 | 27 Nov 23 11:35 UTC | 27 Nov 23 11:35 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-298161                           | mount-start-2-298161 | jenkins | v1.32.0 | 27 Nov 23 11:35 UTC | 27 Nov 23 11:35 UTC |
	| start   | -p mount-start-2-298161                           | mount-start-2-298161 | jenkins | v1.32.0 | 27 Nov 23 11:35 UTC | 27 Nov 23 11:35 UTC |
	| ssh     | mount-start-2-298161 ssh -- ls                    | mount-start-2-298161 | jenkins | v1.32.0 | 27 Nov 23 11:36 UTC | 27 Nov 23 11:36 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-298161                           | mount-start-2-298161 | jenkins | v1.32.0 | 27 Nov 23 11:36 UTC | 27 Nov 23 11:36 UTC |
	| delete  | -p mount-start-1-283985                           | mount-start-1-283985 | jenkins | v1.32.0 | 27 Nov 23 11:36 UTC | 27 Nov 23 11:36 UTC |
	| start   | -p multinode-780990                               | multinode-780990     | jenkins | v1.32.0 | 27 Nov 23 11:36 UTC | 27 Nov 23 11:37 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-780990 -- apply -f                   | multinode-780990     | jenkins | v1.32.0 | 27 Nov 23 11:37 UTC | 27 Nov 23 11:37 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-780990 -- rollout                    | multinode-780990     | jenkins | v1.32.0 | 27 Nov 23 11:37 UTC | 27 Nov 23 11:37 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-780990 -- get pods -o                | multinode-780990     | jenkins | v1.32.0 | 27 Nov 23 11:37 UTC | 27 Nov 23 11:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-780990 -- get pods -o                | multinode-780990     | jenkins | v1.32.0 | 27 Nov 23 11:37 UTC | 27 Nov 23 11:37 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-780990 -- exec                       | multinode-780990     | jenkins | v1.32.0 | 27 Nov 23 11:37 UTC | 27 Nov 23 11:37 UTC |
	|         | busybox-5bc68d56bd-fxkgq --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-780990 -- exec                       | multinode-780990     | jenkins | v1.32.0 | 27 Nov 23 11:37 UTC | 27 Nov 23 11:37 UTC |
	|         | busybox-5bc68d56bd-wslrr --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-780990 -- exec                       | multinode-780990     | jenkins | v1.32.0 | 27 Nov 23 11:37 UTC | 27 Nov 23 11:37 UTC |
	|         | busybox-5bc68d56bd-fxkgq --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-780990 -- exec                       | multinode-780990     | jenkins | v1.32.0 | 27 Nov 23 11:37 UTC | 27 Nov 23 11:37 UTC |
	|         | busybox-5bc68d56bd-wslrr --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-780990 -- exec                       | multinode-780990     | jenkins | v1.32.0 | 27 Nov 23 11:37 UTC | 27 Nov 23 11:37 UTC |
	|         | busybox-5bc68d56bd-fxkgq -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-780990 -- exec                       | multinode-780990     | jenkins | v1.32.0 | 27 Nov 23 11:37 UTC | 27 Nov 23 11:37 UTC |
	|         | busybox-5bc68d56bd-wslrr -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-780990 -- get pods -o                | multinode-780990     | jenkins | v1.32.0 | 27 Nov 23 11:37 UTC | 27 Nov 23 11:37 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-780990 -- exec                       | multinode-780990     | jenkins | v1.32.0 | 27 Nov 23 11:37 UTC | 27 Nov 23 11:37 UTC |
	|         | busybox-5bc68d56bd-fxkgq                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-780990 -- exec                       | multinode-780990     | jenkins | v1.32.0 | 27 Nov 23 11:38 UTC |                     |
	|         | busybox-5bc68d56bd-fxkgq -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-780990 -- exec                       | multinode-780990     | jenkins | v1.32.0 | 27 Nov 23 11:38 UTC | 27 Nov 23 11:38 UTC |
	|         | busybox-5bc68d56bd-wslrr                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-780990 -- exec                       | multinode-780990     | jenkins | v1.32.0 | 27 Nov 23 11:38 UTC |                     |
	|         | busybox-5bc68d56bd-wslrr -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
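The last two audit rows are the failure itself: both "ping -c 1 192.168.58.1" invocations (one per busybox pod) record a Start Time but no End Time, meaning neither pod could reach the host-side gateway. A reproduction sketch under the same assumptions (profile name, pod name, and gateway IP taken from the rows above; a fresh run would have different pod names):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Same check the test performs: ping the docker network gateway
		// (the host) from inside one of the busybox pods.
		cmd := exec.Command("out/minikube-linux-amd64", "kubectl",
			"-p", "multinode-780990", "--",
			"exec", "busybox-5bc68d56bd-fxkgq", "--",
			"sh", "-c", "ping -c 1 192.168.58.1")
		out, err := cmd.CombinedOutput()
		log.Printf("output:\n%s", out)
		if err != nil {
			log.Fatalf("host ping failed, matching the audit rows above: %v", err)
		}
	}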
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 11:36:02
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 11:36:02.454593  165526 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:36:02.454782  165526 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:36:02.454795  165526 out.go:309] Setting ErrFile to fd 2...
	I1127 11:36:02.454803  165526 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:36:02.455037  165526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
	I1127 11:36:02.455732  165526 out.go:303] Setting JSON to false
	I1127 11:36:02.457139  165526 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8316,"bootTime":1701076647,"procs":694,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:36:02.457214  165526 start.go:138] virtualization: kvm guest
	I1127 11:36:02.460293  165526 out.go:177] * [multinode-780990] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:36:02.462518  165526 notify.go:220] Checking for updates...
	I1127 11:36:02.462527  165526 out.go:177]   - MINIKUBE_LOCATION=17644
	I1127 11:36:02.464367  165526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:36:02.466167  165526 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:36:02.468097  165526 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	I1127 11:36:02.470248  165526 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 11:36:02.472771  165526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 11:36:02.474999  165526 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:36:02.500313  165526 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 11:36:02.500443  165526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:36:02.557935  165526 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:36 SystemTime:2023-11-27 11:36:02.548524119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:36:02.558040  165526 docker.go:295] overlay module found
	I1127 11:36:02.561635  165526 out.go:177] * Using the docker driver based on user configuration
	I1127 11:36:02.563266  165526 start.go:298] selected driver: docker
	I1127 11:36:02.563291  165526 start.go:902] validating driver "docker" against <nil>
	I1127 11:36:02.563306  165526 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 11:36:02.564216  165526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:36:02.622667  165526 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:36 SystemTime:2023-11-27 11:36:02.613361956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:36:02.622881  165526 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 11:36:02.623122  165526 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 11:36:02.625283  165526 out.go:177] * Using Docker driver with root privileges
	I1127 11:36:02.627175  165526 cni.go:84] Creating CNI manager for ""
	I1127 11:36:02.627203  165526 cni.go:136] 0 nodes found, recommending kindnet
	I1127 11:36:02.627216  165526 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1127 11:36:02.627234  165526 start_flags.go:323] config:
	{Name:multinode-780990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-780990 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:36:02.629137  165526 out.go:177] * Starting control plane node multinode-780990 in cluster multinode-780990
	I1127 11:36:02.630829  165526 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 11:36:02.632476  165526 out.go:177] * Pulling base image ...
	I1127 11:36:02.634106  165526 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 11:36:02.634172  165526 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1127 11:36:02.634189  165526 cache.go:56] Caching tarball of preloaded images
	I1127 11:36:02.634267  165526 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 11:36:02.634317  165526 preload.go:174] Found /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1127 11:36:02.634338  165526 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1127 11:36:02.634697  165526 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/config.json ...
	I1127 11:36:02.634727  165526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/config.json: {Name:mk5ff85eb83932405a0b5393280d26c74cebfa94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:36:02.652094  165526 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1127 11:36:02.652139  165526 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1127 11:36:02.652160  165526 cache.go:194] Successfully downloaded all kic artifacts
	I1127 11:36:02.652207  165526 start.go:365] acquiring machines lock for multinode-780990: {Name:mkefe64e962078ef1faeb2be56ee2f4f5481c71d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:36:02.652319  165526 start.go:369] acquired machines lock for "multinode-780990" in 94.081µs
	I1127 11:36:02.652345  165526 start.go:93] Provisioning new machine with config: &{Name:multinode-780990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-780990 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 11:36:02.652429  165526 start.go:125] createHost starting for "" (driver="docker")
	I1127 11:36:02.654720  165526 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1127 11:36:02.654979  165526 start.go:159] libmachine.API.Create for "multinode-780990" (driver="docker")
	I1127 11:36:02.655013  165526 client.go:168] LocalClient.Create starting
	I1127 11:36:02.655099  165526 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem
	I1127 11:36:02.655133  165526 main.go:141] libmachine: Decoding PEM data...
	I1127 11:36:02.655146  165526 main.go:141] libmachine: Parsing certificate...
	I1127 11:36:02.655216  165526 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem
	I1127 11:36:02.655235  165526 main.go:141] libmachine: Decoding PEM data...
	I1127 11:36:02.655247  165526 main.go:141] libmachine: Parsing certificate...
	I1127 11:36:02.655552  165526 cli_runner.go:164] Run: docker network inspect multinode-780990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1127 11:36:02.672903  165526 cli_runner.go:211] docker network inspect multinode-780990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1127 11:36:02.672985  165526 network_create.go:281] running [docker network inspect multinode-780990] to gather additional debugging logs...
	I1127 11:36:02.673005  165526 cli_runner.go:164] Run: docker network inspect multinode-780990
	W1127 11:36:02.689726  165526 cli_runner.go:211] docker network inspect multinode-780990 returned with exit code 1
	I1127 11:36:02.689759  165526 network_create.go:284] error running [docker network inspect multinode-780990]: docker network inspect multinode-780990: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-780990 not found
	I1127 11:36:02.689771  165526 network_create.go:286] output of [docker network inspect multinode-780990]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-780990 not found
	
	** /stderr **
	I1127 11:36:02.689869  165526 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 11:36:02.707392  165526 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7f94acb005f8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:39:f5:41:cd} reservation:<nil>}
	I1127 11:36:02.707915  165526 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002604520}
	I1127 11:36:02.707944  165526 network_create.go:124] attempt to create docker network multinode-780990 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1127 11:36:02.708004  165526 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-780990 multinode-780990
	I1127 11:36:02.764541  165526 network_create.go:108] docker network multinode-780990 192.168.58.0/24 created
	I1127 11:36:02.764585  165526 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-780990" container
	I1127 11:36:02.764666  165526 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1127 11:36:02.781554  165526 cli_runner.go:164] Run: docker volume create multinode-780990 --label name.minikube.sigs.k8s.io=multinode-780990 --label created_by.minikube.sigs.k8s.io=true
	I1127 11:36:02.800680  165526 oci.go:103] Successfully created a docker volume multinode-780990
	I1127 11:36:02.800780  165526 cli_runner.go:164] Run: docker run --rm --name multinode-780990-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-780990 --entrypoint /usr/bin/test -v multinode-780990:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1127 11:36:03.344035  165526 oci.go:107] Successfully prepared a docker volume multinode-780990
	I1127 11:36:03.344088  165526 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 11:36:03.344113  165526 kic.go:194] Starting extracting preloaded images to volume ...
	I1127 11:36:03.344214  165526 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-780990:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1127 11:36:08.559427  165526 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-780990:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (5.215162851s)
	I1127 11:36:08.559461  165526 kic.go:203] duration metric: took 5.215345 seconds to extract preloaded images to volume
	W1127 11:36:08.559717  165526 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1127 11:36:08.559877  165526 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1127 11:36:08.610411  165526 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-780990 --name multinode-780990 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-780990 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-780990 --network multinode-780990 --ip 192.168.58.2 --volume multinode-780990:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
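Note the publish flags in the docker run above: the double-colon form --publish=127.0.0.1::22 leaves the host port empty, so Docker binds an ephemeral port on 127.0.0.1 (the empty "HostPort" entries in the PortBindings section of the inspect dump earlier, resolved at runtime to 32843-32847). The provisioner recovers the chosen port with the inspect template that appears a few lines below; a standalone Go sketch of the same lookup (assuming the container name from this report):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask Docker which host port it picked for the container's sshd (22/tcp).
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"multinode-780990").Output()
		if err != nil {
			log.Fatal(err)
		}
		// For this run the answer was 32847.
		fmt.Println("ssh published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}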
	I1127 11:36:08.910586  165526 cli_runner.go:164] Run: docker container inspect multinode-780990 --format={{.State.Running}}
	I1127 11:36:08.927839  165526 cli_runner.go:164] Run: docker container inspect multinode-780990 --format={{.State.Status}}
	I1127 11:36:08.944513  165526 cli_runner.go:164] Run: docker exec multinode-780990 stat /var/lib/dpkg/alternatives/iptables
	I1127 11:36:08.983525  165526 oci.go:144] the created container "multinode-780990" has a running status.
	I1127 11:36:08.983555  165526 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990/id_rsa...
	I1127 11:36:09.294998  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1127 11:36:09.295043  165526 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1127 11:36:09.314026  165526 cli_runner.go:164] Run: docker container inspect multinode-780990 --format={{.State.Status}}
	I1127 11:36:09.331079  165526 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1127 11:36:09.331106  165526 kic_runner.go:114] Args: [docker exec --privileged multinode-780990 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1127 11:36:09.397229  165526 cli_runner.go:164] Run: docker container inspect multinode-780990 --format={{.State.Status}}
	I1127 11:36:09.416108  165526 machine.go:88] provisioning docker machine ...
	I1127 11:36:09.416152  165526 ubuntu.go:169] provisioning hostname "multinode-780990"
	I1127 11:36:09.416228  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990
	I1127 11:36:09.432944  165526 main.go:141] libmachine: Using SSH client type: native
	I1127 11:36:09.433367  165526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1127 11:36:09.433384  165526 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-780990 && echo "multinode-780990" | sudo tee /etc/hostname
	I1127 11:36:09.566950  165526 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-780990
	
	I1127 11:36:09.567037  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990
	I1127 11:36:09.584995  165526 main.go:141] libmachine: Using SSH client type: native
	I1127 11:36:09.585338  165526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1127 11:36:09.585364  165526 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-780990' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-780990/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-780990' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 11:36:09.707856  165526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 11:36:09.707887  165526 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17644-72381/.minikube CaCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17644-72381/.minikube}
	I1127 11:36:09.707905  165526 ubuntu.go:177] setting up certificates
	I1127 11:36:09.707922  165526 provision.go:83] configureAuth start
	I1127 11:36:09.707971  165526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-780990
	I1127 11:36:09.724322  165526 provision.go:138] copyHostCerts
	I1127 11:36:09.724372  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem
	I1127 11:36:09.724404  165526 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem, removing ...
	I1127 11:36:09.724414  165526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem
	I1127 11:36:09.724489  165526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem (1082 bytes)
	I1127 11:36:09.724601  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem
	I1127 11:36:09.724631  165526 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem, removing ...
	I1127 11:36:09.724641  165526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem
	I1127 11:36:09.724687  165526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem (1123 bytes)
	I1127 11:36:09.724769  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem
	I1127 11:36:09.724793  165526 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem, removing ...
	I1127 11:36:09.724800  165526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem
	I1127 11:36:09.724839  165526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem (1675 bytes)
	I1127 11:36:09.724920  165526 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca-key.pem org=jenkins.multinode-780990 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-780990]
	I1127 11:36:10.045437  165526 provision.go:172] copyRemoteCerts
	I1127 11:36:10.045508  165526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 11:36:10.045548  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990
	I1127 11:36:10.061982  165526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990/id_rsa Username:docker}
	I1127 11:36:10.151688  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1127 11:36:10.151777  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1127 11:36:10.172649  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1127 11:36:10.172709  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1127 11:36:10.193603  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1127 11:36:10.193676  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1127 11:36:10.214600  165526 provision.go:86] duration metric: configureAuth took 506.663145ms
	I1127 11:36:10.214629  165526 ubuntu.go:193] setting minikube options for container-runtime
	I1127 11:36:10.214820  165526 config.go:182] Loaded profile config "multinode-780990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 11:36:10.214938  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990
	I1127 11:36:10.231734  165526 main.go:141] libmachine: Using SSH client type: native
	I1127 11:36:10.232120  165526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1127 11:36:10.232149  165526 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 11:36:10.436274  165526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 11:36:10.436301  165526 machine.go:91] provisioned docker machine in 1.020164858s
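The %!s(MISSING) in the provisioning command a few lines above is not part of the command minikube ran: it is Go's fmt package marking a format verb that had no matching argument when the log line was rendered, so the literal %s intended for the remote shell's printf was swallowed by the logger. The df output logged later as "31%!" followed by "(MISSING)", and the "%!p(MISSING)" in a find invocation, are the same artifact triggered by bare % characters. A two-line demonstration:

	package main

	import "fmt"

	func main() {
		// One %s verb, zero arguments: fmt substitutes "%!s(MISSING)".
		fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s\n")
	}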
	I1127 11:36:10.436324  165526 client.go:171] LocalClient.Create took 7.781294183s
	I1127 11:36:10.436361  165526 start.go:167] duration metric: libmachine.API.Create for "multinode-780990" took 7.781381928s
	I1127 11:36:10.436372  165526 start.go:300] post-start starting for "multinode-780990" (driver="docker")
	I1127 11:36:10.436380  165526 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 11:36:10.436444  165526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 11:36:10.436481  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990
	I1127 11:36:10.452854  165526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990/id_rsa Username:docker}
	I1127 11:36:10.540261  165526 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 11:36:10.543636  165526 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1127 11:36:10.543679  165526 command_runner.go:130] > NAME="Ubuntu"
	I1127 11:36:10.543690  165526 command_runner.go:130] > VERSION_ID="22.04"
	I1127 11:36:10.543701  165526 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1127 11:36:10.543709  165526 command_runner.go:130] > VERSION_CODENAME=jammy
	I1127 11:36:10.543723  165526 command_runner.go:130] > ID=ubuntu
	I1127 11:36:10.543727  165526 command_runner.go:130] > ID_LIKE=debian
	I1127 11:36:10.543731  165526 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1127 11:36:10.543738  165526 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1127 11:36:10.543744  165526 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1127 11:36:10.543752  165526 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1127 11:36:10.543756  165526 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1127 11:36:10.543802  165526 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 11:36:10.543824  165526 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 11:36:10.543832  165526 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 11:36:10.543841  165526 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1127 11:36:10.543851  165526 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-72381/.minikube/addons for local assets ...
	I1127 11:36:10.543905  165526 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-72381/.minikube/files for local assets ...
	I1127 11:36:10.543985  165526 filesync.go:149] local asset: /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem -> 791532.pem in /etc/ssl/certs
	I1127 11:36:10.543999  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem -> /etc/ssl/certs/791532.pem
	I1127 11:36:10.544080  165526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 11:36:10.551806  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem --> /etc/ssl/certs/791532.pem (1708 bytes)
	I1127 11:36:10.572838  165526 start.go:303] post-start completed in 136.45305ms
	I1127 11:36:10.573183  165526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-780990
	I1127 11:36:10.589355  165526 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/config.json ...
	I1127 11:36:10.589604  165526 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 11:36:10.589657  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990
	I1127 11:36:10.606754  165526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990/id_rsa Username:docker}
	I1127 11:36:10.692454  165526 command_runner.go:130] > 31%!
	(MISSING)I1127 11:36:10.692518  165526 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 11:36:10.696426  165526 command_runner.go:130] > 204G
	I1127 11:36:10.696689  165526 start.go:128] duration metric: createHost completed in 8.044236745s
	I1127 11:36:10.696716  165526 start.go:83] releasing machines lock for "multinode-780990", held for 8.044381654s
	I1127 11:36:10.696781  165526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-780990
	I1127 11:36:10.712574  165526 ssh_runner.go:195] Run: cat /version.json
	I1127 11:36:10.712624  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990
	I1127 11:36:10.712683  165526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 11:36:10.712762  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990
	I1127 11:36:10.729175  165526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990/id_rsa Username:docker}
	I1127 11:36:10.729780  165526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990/id_rsa Username:docker}
	I1127 11:36:10.815454  165526 command_runner.go:130] > {"iso_version": "v1.32.1-1699648094-17581", "kicbase_version": "v0.0.42-1700142204-17634", "minikube_version": "v1.32.0", "commit": "6532cab52e164d1138ecb8469e77a57a00b45825"}
	I1127 11:36:10.900712  165526 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1127 11:36:10.902868  165526 ssh_runner.go:195] Run: systemctl --version
	I1127 11:36:10.907143  165526 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1127 11:36:10.907183  165526 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1127 11:36:10.907242  165526 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 11:36:11.043445  165526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 11:36:11.047386  165526 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1127 11:36:11.047450  165526 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1127 11:36:11.047467  165526 command_runner.go:130] > Device: 33h/51d	Inode: 533119      Links: 1
	I1127 11:36:11.047482  165526 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 11:36:11.047496  165526 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1127 11:36:11.047508  165526 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1127 11:36:11.047521  165526 command_runner.go:130] > Change: 2023-11-27 11:17:12.627806055 +0000
	I1127 11:36:11.047533  165526 command_runner.go:130] >  Birth: 2023-11-27 11:17:12.627806055 +0000
	I1127 11:36:11.047638  165526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 11:36:11.064793  165526 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1127 11:36:11.064874  165526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 11:36:11.091355  165526 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1127 11:36:11.091404  165526 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
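
The two find/mv passes just above sideline every loopback, bridge, and podman CNI config by renaming it with a .mk_disabled suffix, so the CNI that minikube installs (kindnet here) is the only active one. A minimal sketch of the same rename pass; disableCNIConfigs is a hypothetical name, not minikube's cni package API:

    package main

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // disableCNIConfigs renames files in dir matching any pattern to
    // <name>.mk_disabled and returns the paths it touched.
    func disableCNIConfigs(dir string, patterns []string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            if e.IsDir() || strings.HasSuffix(e.Name(), ".mk_disabled") {
                continue
            }
            for _, p := range patterns {
                if ok, _ := filepath.Match(p, e.Name()); ok {
                    src := filepath.Join(dir, e.Name())
                    if err := os.Rename(src, src+".mk_disabled"); err != nil {
                        return disabled, err
                    }
                    disabled = append(disabled, src)
                    break
                }
            }
        }
        return disabled, nil
    }

Called as in the log: disableCNIConfigs("/etc/cni/net.d", []string{"*loopback.conf*", "*bridge*", "*podman*"}).
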
	I1127 11:36:11.091413  165526 start.go:472] detecting cgroup driver to use...
	I1127 11:36:11.091449  165526 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 11:36:11.091508  165526 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 11:36:11.105261  165526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 11:36:11.114920  165526 docker.go:203] disabling cri-docker service (if available) ...
	I1127 11:36:11.114974  165526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 11:36:11.126805  165526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 11:36:11.139906  165526 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 11:36:11.221376  165526 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 11:36:11.234207  165526 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1127 11:36:11.297032  165526 docker.go:219] disabling docker service ...
	I1127 11:36:11.297124  165526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 11:36:11.313749  165526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 11:36:11.323812  165526 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 11:36:11.397518  165526 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1127 11:36:11.397600  165526 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 11:36:11.408178  165526 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1127 11:36:11.479278  165526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
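
cri-docker and docker are taken out of the way with the same four-step systemd sequence each time: stop the socket, stop the service, disable the socket, mask the service, then verify with is-active. A best-effort sketch of that sequence run over SSH; runSSH and the hard-coded port 32847 (taken from the sshutil lines above) are hypothetical stand-ins for minikube's ssh_runner:

    package main

    import "os/exec"

    // runSSH executes one command on the node over SSH.
    func runSSH(args ...string) error {
        ssh := append([]string{"-p", "32847", "docker@127.0.0.1"}, args...)
        return exec.Command("ssh", ssh...).Run()
    }

    // disableUnit mirrors the stop/disable/mask sequence in the log.
    // Each step is best-effort: the unit may not exist on this image.
    func disableUnit(name string) {
        _ = runSSH("sudo", "systemctl", "stop", "-f", name+".socket")
        _ = runSSH("sudo", "systemctl", "stop", "-f", name+".service")
        _ = runSSH("sudo", "systemctl", "disable", name+".socket")
        _ = runSSH("sudo", "systemctl", "mask", name+".service")
    }
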
	I1127 11:36:11.489947  165526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 11:36:11.504110  165526 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1127 11:36:11.505106  165526 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1127 11:36:11.505166  165526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 11:36:11.514465  165526 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1127 11:36:11.514530  165526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 11:36:11.523784  165526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 11:36:11.532754  165526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 11:36:11.541760  165526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 11:36:11.550199  165526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 11:36:11.557453  165526 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1127 11:36:11.558089  165526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
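
With the other runtimes quiesced, CRI-O itself is pointed at the right pause image and cgroup driver by rewriting /etc/crio/crio.conf.d/02-crio.conf in place, exactly as the sed -i runs above show, after /etc/crictl.yaml was written so crictl talks to the CRI-O socket. A Go sketch of the same in-place edits; configureCrio is a hypothetical name and the regexes deliberately mirror the sed patterns, duplicates and all:

    package main

    import (
        "os"
        "regexp"
        "strings"
    )

    func configureCrio(path, pauseImage, cgroupManager string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        s := string(data)
        // sed 's|^.*pause_image = .*$|pause_image = "..."|'
        s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(s, `pause_image = "`+pauseImage+`"`)
        // sed 's|^.*cgroup_manager = .*$|cgroup_manager = "..."|'
        s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(s, `cgroup_manager = "`+cgroupManager+`"`)
        // sed '/conmon_cgroup = .*/d', then
        // sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
        s = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(s, "")
        s = strings.Replace(s,
            `cgroup_manager = "`+cgroupManager+`"`,
            `cgroup_manager = "`+cgroupManager+`"`+"\nconmon_cgroup = \"pod\"", 1)
        return os.WriteFile(path, []byte(s), 0o644)
    }
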
	I1127 11:36:11.565728  165526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 11:36:11.636914  165526 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1127 11:36:11.730043  165526 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1127 11:36:11.730107  165526 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1127 11:36:11.733428  165526 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1127 11:36:11.733452  165526 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1127 11:36:11.733459  165526 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I1127 11:36:11.733466  165526 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 11:36:11.733471  165526 command_runner.go:130] > Access: 2023-11-27 11:36:11.716185750 +0000
	I1127 11:36:11.733476  165526 command_runner.go:130] > Modify: 2023-11-27 11:36:11.716185750 +0000
	I1127 11:36:11.733490  165526 command_runner.go:130] > Change: 2023-11-27 11:36:11.716185750 +0000
	I1127 11:36:11.733494  165526 command_runner.go:130] >  Birth: -
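
After restarting crio, start.go waits up to 60s for the CRI socket to appear before probing crictl. A small local-path sketch of that wait loop (the real check runs stat over SSH, as the lines above show); waitForSocket is a hypothetical name:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists as a unix socket or the
    // timeout elapses, like "Will wait 60s for socket path ..." above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s did not appear within %s", path, timeout)
    }
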
	I1127 11:36:11.733512  165526 start.go:540] Will wait 60s for crictl version
	I1127 11:36:11.733558  165526 ssh_runner.go:195] Run: which crictl
	I1127 11:36:11.736632  165526 command_runner.go:130] > /usr/bin/crictl
	I1127 11:36:11.736696  165526 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 11:36:11.765552  165526 command_runner.go:130] > Version:  0.1.0
	I1127 11:36:11.765580  165526 command_runner.go:130] > RuntimeName:  cri-o
	I1127 11:36:11.765589  165526 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1127 11:36:11.765598  165526 command_runner.go:130] > RuntimeApiVersion:  v1
	I1127 11:36:11.767403  165526 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1127 11:36:11.767484  165526 ssh_runner.go:195] Run: crio --version
	I1127 11:36:11.799480  165526 command_runner.go:130] > crio version 1.24.6
	I1127 11:36:11.799501  165526 command_runner.go:130] > Version:          1.24.6
	I1127 11:36:11.799508  165526 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1127 11:36:11.799512  165526 command_runner.go:130] > GitTreeState:     clean
	I1127 11:36:11.799524  165526 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1127 11:36:11.799531  165526 command_runner.go:130] > GoVersion:        go1.18.2
	I1127 11:36:11.799537  165526 command_runner.go:130] > Compiler:         gc
	I1127 11:36:11.799546  165526 command_runner.go:130] > Platform:         linux/amd64
	I1127 11:36:11.799554  165526 command_runner.go:130] > Linkmode:         dynamic
	I1127 11:36:11.799571  165526 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1127 11:36:11.799582  165526 command_runner.go:130] > SeccompEnabled:   true
	I1127 11:36:11.799589  165526 command_runner.go:130] > AppArmorEnabled:  false
	I1127 11:36:11.801053  165526 ssh_runner.go:195] Run: crio --version
	I1127 11:36:11.833800  165526 command_runner.go:130] > crio version 1.24.6
	I1127 11:36:11.833824  165526 command_runner.go:130] > Version:          1.24.6
	I1127 11:36:11.833831  165526 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1127 11:36:11.833836  165526 command_runner.go:130] > GitTreeState:     clean
	I1127 11:36:11.833842  165526 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1127 11:36:11.833855  165526 command_runner.go:130] > GoVersion:        go1.18.2
	I1127 11:36:11.833862  165526 command_runner.go:130] > Compiler:         gc
	I1127 11:36:11.833869  165526 command_runner.go:130] > Platform:         linux/amd64
	I1127 11:36:11.833881  165526 command_runner.go:130] > Linkmode:         dynamic
	I1127 11:36:11.833895  165526 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1127 11:36:11.833902  165526 command_runner.go:130] > SeccompEnabled:   true
	I1127 11:36:11.833906  165526 command_runner.go:130] > AppArmorEnabled:  false
	I1127 11:36:11.836033  165526 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1127 11:36:11.837691  165526 cli_runner.go:164] Run: docker network inspect multinode-780990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 11:36:11.853557  165526 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1127 11:36:11.857187  165526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
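
The two host-file commands above first grep for an existing host.minikube.internal entry, then rewrite /etc/hosts so the gateway IP (192.168.58.1 here) maps to that name exactly once. The same logic in Go, assuming direct access to the hosts file rather than minikube's ssh_runner; ensureHostEntry is a hypothetical name:

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostEntry drops any line ending in "\t<name>" and appends a
    // fresh "ip\tname" mapping, mirroring the grep -v / echo / cp pipeline.
    func ensureHostEntry(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }
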
	I1127 11:36:11.867280  165526 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 11:36:11.867334  165526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 11:36:11.917323  165526 command_runner.go:130] > {
	I1127 11:36:11.917345  165526 command_runner.go:130] >   "images": [
	I1127 11:36:11.917349  165526 command_runner.go:130] >     {
	I1127 11:36:11.917357  165526 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1127 11:36:11.917362  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.917367  165526 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1127 11:36:11.917371  165526 command_runner.go:130] >       ],
	I1127 11:36:11.917376  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.917390  165526 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1127 11:36:11.917398  165526 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1127 11:36:11.917402  165526 command_runner.go:130] >       ],
	I1127 11:36:11.917406  165526 command_runner.go:130] >       "size": "65258016",
	I1127 11:36:11.917411  165526 command_runner.go:130] >       "uid": null,
	I1127 11:36:11.917415  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.917422  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.917428  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.917433  165526 command_runner.go:130] >     },
	I1127 11:36:11.917439  165526 command_runner.go:130] >     {
	I1127 11:36:11.917445  165526 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1127 11:36:11.917451  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.917457  165526 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1127 11:36:11.917463  165526 command_runner.go:130] >       ],
	I1127 11:36:11.917467  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.917477  165526 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1127 11:36:11.917485  165526 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1127 11:36:11.917491  165526 command_runner.go:130] >       ],
	I1127 11:36:11.917501  165526 command_runner.go:130] >       "size": "31470524",
	I1127 11:36:11.917507  165526 command_runner.go:130] >       "uid": null,
	I1127 11:36:11.917511  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.917518  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.917522  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.917527  165526 command_runner.go:130] >     },
	I1127 11:36:11.917530  165526 command_runner.go:130] >     {
	I1127 11:36:11.917536  165526 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1127 11:36:11.917543  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.917548  165526 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1127 11:36:11.917559  165526 command_runner.go:130] >       ],
	I1127 11:36:11.917563  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.917573  165526 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1127 11:36:11.917581  165526 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1127 11:36:11.917587  165526 command_runner.go:130] >       ],
	I1127 11:36:11.917591  165526 command_runner.go:130] >       "size": "53621675",
	I1127 11:36:11.917597  165526 command_runner.go:130] >       "uid": null,
	I1127 11:36:11.917601  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.917607  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.917614  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.917617  165526 command_runner.go:130] >     },
	I1127 11:36:11.917623  165526 command_runner.go:130] >     {
	I1127 11:36:11.917629  165526 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1127 11:36:11.917636  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.917642  165526 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1127 11:36:11.917648  165526 command_runner.go:130] >       ],
	I1127 11:36:11.917652  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.917661  165526 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1127 11:36:11.917668  165526 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1127 11:36:11.917680  165526 command_runner.go:130] >       ],
	I1127 11:36:11.917688  165526 command_runner.go:130] >       "size": "295456551",
	I1127 11:36:11.917692  165526 command_runner.go:130] >       "uid": {
	I1127 11:36:11.917696  165526 command_runner.go:130] >         "value": "0"
	I1127 11:36:11.917700  165526 command_runner.go:130] >       },
	I1127 11:36:11.917705  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.917709  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.917718  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.917722  165526 command_runner.go:130] >     },
	I1127 11:36:11.917726  165526 command_runner.go:130] >     {
	I1127 11:36:11.917732  165526 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1127 11:36:11.917738  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.917744  165526 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1127 11:36:11.917749  165526 command_runner.go:130] >       ],
	I1127 11:36:11.917754  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.917760  165526 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1127 11:36:11.917770  165526 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1127 11:36:11.917773  165526 command_runner.go:130] >       ],
	I1127 11:36:11.917778  165526 command_runner.go:130] >       "size": "127226832",
	I1127 11:36:11.917784  165526 command_runner.go:130] >       "uid": {
	I1127 11:36:11.917788  165526 command_runner.go:130] >         "value": "0"
	I1127 11:36:11.917792  165526 command_runner.go:130] >       },
	I1127 11:36:11.917796  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.917800  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.917807  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.917819  165526 command_runner.go:130] >     },
	I1127 11:36:11.917825  165526 command_runner.go:130] >     {
	I1127 11:36:11.917833  165526 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1127 11:36:11.917838  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.917843  165526 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1127 11:36:11.917849  165526 command_runner.go:130] >       ],
	I1127 11:36:11.917854  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.917862  165526 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1127 11:36:11.917872  165526 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1127 11:36:11.917878  165526 command_runner.go:130] >       ],
	I1127 11:36:11.917882  165526 command_runner.go:130] >       "size": "123261750",
	I1127 11:36:11.917888  165526 command_runner.go:130] >       "uid": {
	I1127 11:36:11.917892  165526 command_runner.go:130] >         "value": "0"
	I1127 11:36:11.917895  165526 command_runner.go:130] >       },
	I1127 11:36:11.917900  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.917905  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.917909  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.917915  165526 command_runner.go:130] >     },
	I1127 11:36:11.917921  165526 command_runner.go:130] >     {
	I1127 11:36:11.917930  165526 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1127 11:36:11.917937  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.917942  165526 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1127 11:36:11.917948  165526 command_runner.go:130] >       ],
	I1127 11:36:11.917952  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.917960  165526 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1127 11:36:11.917969  165526 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1127 11:36:11.917974  165526 command_runner.go:130] >       ],
	I1127 11:36:11.917978  165526 command_runner.go:130] >       "size": "74749335",
	I1127 11:36:11.917984  165526 command_runner.go:130] >       "uid": null,
	I1127 11:36:11.917989  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.917993  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.917997  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.918002  165526 command_runner.go:130] >     },
	I1127 11:36:11.918006  165526 command_runner.go:130] >     {
	I1127 11:36:11.918013  165526 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1127 11:36:11.918018  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.918026  165526 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1127 11:36:11.918031  165526 command_runner.go:130] >       ],
	I1127 11:36:11.918035  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.918076  165526 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1127 11:36:11.918089  165526 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1127 11:36:11.918093  165526 command_runner.go:130] >       ],
	I1127 11:36:11.918097  165526 command_runner.go:130] >       "size": "61551410",
	I1127 11:36:11.918101  165526 command_runner.go:130] >       "uid": {
	I1127 11:36:11.918107  165526 command_runner.go:130] >         "value": "0"
	I1127 11:36:11.918111  165526 command_runner.go:130] >       },
	I1127 11:36:11.918117  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.918121  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.918126  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.918129  165526 command_runner.go:130] >     },
	I1127 11:36:11.918135  165526 command_runner.go:130] >     {
	I1127 11:36:11.918141  165526 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1127 11:36:11.918146  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.918153  165526 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1127 11:36:11.918159  165526 command_runner.go:130] >       ],
	I1127 11:36:11.918168  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.918175  165526 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1127 11:36:11.918181  165526 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1127 11:36:11.918186  165526 command_runner.go:130] >       ],
	I1127 11:36:11.918190  165526 command_runner.go:130] >       "size": "750414",
	I1127 11:36:11.918193  165526 command_runner.go:130] >       "uid": {
	I1127 11:36:11.918198  165526 command_runner.go:130] >         "value": "65535"
	I1127 11:36:11.918201  165526 command_runner.go:130] >       },
	I1127 11:36:11.918206  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.918211  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.918217  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.918222  165526 command_runner.go:130] >     }
	I1127 11:36:11.918231  165526 command_runner.go:130] >   ]
	I1127 11:36:11.918236  165526 command_runner.go:130] > }
	I1127 11:36:11.920021  165526 crio.go:496] all images are preloaded for cri-o runtime.
	I1127 11:36:11.920042  165526 crio.go:415] Images already preloaded, skipping extraction
	I1127 11:36:11.920101  165526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 11:36:11.949635  165526 command_runner.go:130] > {
	I1127 11:36:11.949655  165526 command_runner.go:130] >   "images": [
	I1127 11:36:11.949659  165526 command_runner.go:130] >     {
	I1127 11:36:11.949671  165526 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1127 11:36:11.949676  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.949683  165526 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1127 11:36:11.949686  165526 command_runner.go:130] >       ],
	I1127 11:36:11.949690  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.949705  165526 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1127 11:36:11.949717  165526 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1127 11:36:11.949731  165526 command_runner.go:130] >       ],
	I1127 11:36:11.949738  165526 command_runner.go:130] >       "size": "65258016",
	I1127 11:36:11.949745  165526 command_runner.go:130] >       "uid": null,
	I1127 11:36:11.949755  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.949764  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.949774  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.949778  165526 command_runner.go:130] >     },
	I1127 11:36:11.949784  165526 command_runner.go:130] >     {
	I1127 11:36:11.949794  165526 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1127 11:36:11.949804  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.949814  165526 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1127 11:36:11.949820  165526 command_runner.go:130] >       ],
	I1127 11:36:11.949827  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.949840  165526 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1127 11:36:11.949853  165526 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1127 11:36:11.949862  165526 command_runner.go:130] >       ],
	I1127 11:36:11.949879  165526 command_runner.go:130] >       "size": "31470524",
	I1127 11:36:11.949889  165526 command_runner.go:130] >       "uid": null,
	I1127 11:36:11.949897  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.949907  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.949918  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.949927  165526 command_runner.go:130] >     },
	I1127 11:36:11.949936  165526 command_runner.go:130] >     {
	I1127 11:36:11.949949  165526 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1127 11:36:11.949959  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.949966  165526 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1127 11:36:11.949987  165526 command_runner.go:130] >       ],
	I1127 11:36:11.949998  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.950014  165526 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1127 11:36:11.950029  165526 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1127 11:36:11.950039  165526 command_runner.go:130] >       ],
	I1127 11:36:11.950047  165526 command_runner.go:130] >       "size": "53621675",
	I1127 11:36:11.950053  165526 command_runner.go:130] >       "uid": null,
	I1127 11:36:11.950063  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.950070  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.950086  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.950096  165526 command_runner.go:130] >     },
	I1127 11:36:11.950105  165526 command_runner.go:130] >     {
	I1127 11:36:11.950121  165526 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1127 11:36:11.950130  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.950138  165526 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1127 11:36:11.950147  165526 command_runner.go:130] >       ],
	I1127 11:36:11.950155  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.950170  165526 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1127 11:36:11.950188  165526 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1127 11:36:11.950206  165526 command_runner.go:130] >       ],
	I1127 11:36:11.950215  165526 command_runner.go:130] >       "size": "295456551",
	I1127 11:36:11.950222  165526 command_runner.go:130] >       "uid": {
	I1127 11:36:11.950228  165526 command_runner.go:130] >         "value": "0"
	I1127 11:36:11.950238  165526 command_runner.go:130] >       },
	I1127 11:36:11.950248  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.950259  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.950269  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.950278  165526 command_runner.go:130] >     },
	I1127 11:36:11.950287  165526 command_runner.go:130] >     {
	I1127 11:36:11.950302  165526 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1127 11:36:11.950310  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.950318  165526 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1127 11:36:11.950327  165526 command_runner.go:130] >       ],
	I1127 11:36:11.950338  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.950353  165526 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1127 11:36:11.950365  165526 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1127 11:36:11.950378  165526 command_runner.go:130] >       ],
	I1127 11:36:11.950393  165526 command_runner.go:130] >       "size": "127226832",
	I1127 11:36:11.950403  165526 command_runner.go:130] >       "uid": {
	I1127 11:36:11.950409  165526 command_runner.go:130] >         "value": "0"
	I1127 11:36:11.950417  165526 command_runner.go:130] >       },
	I1127 11:36:11.950426  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.950436  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.950446  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.950455  165526 command_runner.go:130] >     },
	I1127 11:36:11.950464  165526 command_runner.go:130] >     {
	I1127 11:36:11.950477  165526 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1127 11:36:11.950488  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.950496  165526 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1127 11:36:11.950502  165526 command_runner.go:130] >       ],
	I1127 11:36:11.950506  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.950516  165526 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1127 11:36:11.950526  165526 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1127 11:36:11.950531  165526 command_runner.go:130] >       ],
	I1127 11:36:11.950539  165526 command_runner.go:130] >       "size": "123261750",
	I1127 11:36:11.950545  165526 command_runner.go:130] >       "uid": {
	I1127 11:36:11.950553  165526 command_runner.go:130] >         "value": "0"
	I1127 11:36:11.950559  165526 command_runner.go:130] >       },
	I1127 11:36:11.950564  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.950570  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.950574  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.950580  165526 command_runner.go:130] >     },
	I1127 11:36:11.950584  165526 command_runner.go:130] >     {
	I1127 11:36:11.950592  165526 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1127 11:36:11.950598  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.950603  165526 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1127 11:36:11.950609  165526 command_runner.go:130] >       ],
	I1127 11:36:11.950614  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.950621  165526 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1127 11:36:11.950630  165526 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1127 11:36:11.950636  165526 command_runner.go:130] >       ],
	I1127 11:36:11.950640  165526 command_runner.go:130] >       "size": "74749335",
	I1127 11:36:11.950650  165526 command_runner.go:130] >       "uid": null,
	I1127 11:36:11.950657  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.950661  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.950667  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.950671  165526 command_runner.go:130] >     },
	I1127 11:36:11.950677  165526 command_runner.go:130] >     {
	I1127 11:36:11.950683  165526 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1127 11:36:11.950689  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.950694  165526 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1127 11:36:11.950700  165526 command_runner.go:130] >       ],
	I1127 11:36:11.950704  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.950754  165526 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1127 11:36:11.950765  165526 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1127 11:36:11.950768  165526 command_runner.go:130] >       ],
	I1127 11:36:11.950772  165526 command_runner.go:130] >       "size": "61551410",
	I1127 11:36:11.950776  165526 command_runner.go:130] >       "uid": {
	I1127 11:36:11.950780  165526 command_runner.go:130] >         "value": "0"
	I1127 11:36:11.950786  165526 command_runner.go:130] >       },
	I1127 11:36:11.950792  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.950799  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.950803  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.950808  165526 command_runner.go:130] >     },
	I1127 11:36:11.950812  165526 command_runner.go:130] >     {
	I1127 11:36:11.950820  165526 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1127 11:36:11.950824  165526 command_runner.go:130] >       "repoTags": [
	I1127 11:36:11.950830  165526 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1127 11:36:11.950834  165526 command_runner.go:130] >       ],
	I1127 11:36:11.950838  165526 command_runner.go:130] >       "repoDigests": [
	I1127 11:36:11.950846  165526 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1127 11:36:11.950855  165526 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1127 11:36:11.950859  165526 command_runner.go:130] >       ],
	I1127 11:36:11.950865  165526 command_runner.go:130] >       "size": "750414",
	I1127 11:36:11.950869  165526 command_runner.go:130] >       "uid": {
	I1127 11:36:11.950876  165526 command_runner.go:130] >         "value": "65535"
	I1127 11:36:11.950879  165526 command_runner.go:130] >       },
	I1127 11:36:11.950884  165526 command_runner.go:130] >       "username": "",
	I1127 11:36:11.950908  165526 command_runner.go:130] >       "spec": null,
	I1127 11:36:11.950926  165526 command_runner.go:130] >       "pinned": false
	I1127 11:36:11.950936  165526 command_runner.go:130] >     }
	I1127 11:36:11.950945  165526 command_runner.go:130] >   ]
	I1127 11:36:11.950954  165526 command_runner.go:130] > }
	I1127 11:36:11.951993  165526 crio.go:496] all images are preloaded for cri-o runtime.
	I1127 11:36:11.952014  165526 cache_images.go:84] Images are preloaded, skipping loading
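
The "all images are preloaded" / "Images are preloaded, skipping loading" decisions come from parsing the JSON that `sudo crictl images --output json` printed above and checking that every image the k8s v1.28.4 / cri-o preload needs is already tagged. A sketch of that check; imagesPreloaded is a hypothetical name, and only the repoTags field visible in the JSON above is used:

    package main

    import "encoding/json"

    // imagesPreloaded reports whether every required tag appears in the
    // `crictl images --output json` output.
    func imagesPreloaded(crictlJSON []byte, required []string) (bool, error) {
        var list struct {
            Images []struct {
                RepoTags []string `json:"repoTags"`
            } `json:"images"`
        }
        if err := json.Unmarshal(crictlJSON, &list); err != nil {
            return false, err
        }
        have := make(map[string]bool)
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        for _, want := range required {
            if !have[want] {
                return false, nil
            }
        }
        return true, nil
    }
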
	I1127 11:36:11.952069  165526 ssh_runner.go:195] Run: crio config
	I1127 11:36:11.988944  165526 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1127 11:36:11.988971  165526 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1127 11:36:11.988978  165526 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1127 11:36:11.988982  165526 command_runner.go:130] > #
	I1127 11:36:11.988990  165526 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1127 11:36:11.988999  165526 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1127 11:36:11.989016  165526 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1127 11:36:11.989038  165526 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1127 11:36:11.989051  165526 command_runner.go:130] > # reload'.
	I1127 11:36:11.989062  165526 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1127 11:36:11.989076  165526 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1127 11:36:11.989090  165526 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1127 11:36:11.989103  165526 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1127 11:36:11.989123  165526 command_runner.go:130] > [crio]
	I1127 11:36:11.989134  165526 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1127 11:36:11.989147  165526 command_runner.go:130] > # containers images, in this directory.
	I1127 11:36:11.989168  165526 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1127 11:36:11.989190  165526 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1127 11:36:11.989202  165526 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1127 11:36:11.989213  165526 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1127 11:36:11.989226  165526 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1127 11:36:11.989238  165526 command_runner.go:130] > # storage_driver = "vfs"
	I1127 11:36:11.989247  165526 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1127 11:36:11.989264  165526 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1127 11:36:11.989275  165526 command_runner.go:130] > # storage_option = [
	I1127 11:36:11.989280  165526 command_runner.go:130] > # ]
	I1127 11:36:11.989291  165526 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1127 11:36:11.989304  165526 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1127 11:36:11.989316  165526 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1127 11:36:11.989330  165526 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1127 11:36:11.989344  165526 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1127 11:36:11.989355  165526 command_runner.go:130] > # always happen on a node reboot
	I1127 11:36:11.989368  165526 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1127 11:36:11.989381  165526 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1127 11:36:11.989393  165526 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1127 11:36:11.989417  165526 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1127 11:36:11.989440  165526 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1127 11:36:11.989455  165526 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1127 11:36:11.989471  165526 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1127 11:36:11.989481  165526 command_runner.go:130] > # internal_wipe = true
	I1127 11:36:11.989493  165526 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1127 11:36:11.989505  165526 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1127 11:36:11.989518  165526 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1127 11:36:11.989529  165526 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1127 11:36:11.989543  165526 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1127 11:36:11.989552  165526 command_runner.go:130] > [crio.api]
	I1127 11:36:11.989561  165526 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1127 11:36:11.989571  165526 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1127 11:36:11.989587  165526 command_runner.go:130] > # IP address on which the stream server will listen.
	I1127 11:36:11.989606  165526 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1127 11:36:11.989621  165526 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1127 11:36:11.989632  165526 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1127 11:36:11.989642  165526 command_runner.go:130] > # stream_port = "0"
	I1127 11:36:11.989660  165526 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1127 11:36:11.989671  165526 command_runner.go:130] > # stream_enable_tls = false
	I1127 11:36:11.989681  165526 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1127 11:36:11.989689  165526 command_runner.go:130] > # stream_idle_timeout = ""
	I1127 11:36:11.989702  165526 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1127 11:36:11.989714  165526 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1127 11:36:11.989721  165526 command_runner.go:130] > # minutes.
	I1127 11:36:11.989733  165526 command_runner.go:130] > # stream_tls_cert = ""
	I1127 11:36:11.989743  165526 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1127 11:36:11.989756  165526 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1127 11:36:11.989766  165526 command_runner.go:130] > # stream_tls_key = ""
	I1127 11:36:11.989776  165526 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1127 11:36:11.989790  165526 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1127 11:36:11.989802  165526 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1127 11:36:11.989811  165526 command_runner.go:130] > # stream_tls_ca = ""
	I1127 11:36:11.989823  165526 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1127 11:36:11.989833  165526 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1127 11:36:11.989844  165526 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1127 11:36:11.989861  165526 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1127 11:36:11.989946  165526 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1127 11:36:11.989962  165526 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1127 11:36:11.989969  165526 command_runner.go:130] > [crio.runtime]
	I1127 11:36:11.989982  165526 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1127 11:36:11.989995  165526 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1127 11:36:11.990002  165526 command_runner.go:130] > # "nofile=1024:2048"
	I1127 11:36:11.990012  165526 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1127 11:36:11.990022  165526 command_runner.go:130] > # default_ulimits = [
	I1127 11:36:11.990033  165526 command_runner.go:130] > # ]
	I1127 11:36:11.990043  165526 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1127 11:36:11.990055  165526 command_runner.go:130] > # no_pivot = false
	I1127 11:36:11.990065  165526 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1127 11:36:11.990074  165526 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1127 11:36:11.990081  165526 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1127 11:36:11.990090  165526 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1127 11:36:11.990096  165526 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1127 11:36:11.990111  165526 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1127 11:36:11.990120  165526 command_runner.go:130] > # conmon = ""
	I1127 11:36:11.990126  165526 command_runner.go:130] > # Cgroup setting for conmon
	I1127 11:36:11.990134  165526 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1127 11:36:11.990140  165526 command_runner.go:130] > conmon_cgroup = "pod"
	I1127 11:36:11.990147  165526 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1127 11:36:11.990154  165526 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1127 11:36:11.990165  165526 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1127 11:36:11.990170  165526 command_runner.go:130] > # conmon_env = [
	I1127 11:36:11.990174  165526 command_runner.go:130] > # ]
	I1127 11:36:11.990181  165526 command_runner.go:130] > # Additional environment variables to set for all the
	I1127 11:36:11.990187  165526 command_runner.go:130] > # containers. These are overridden if set in the
	I1127 11:36:11.990194  165526 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1127 11:36:11.990199  165526 command_runner.go:130] > # default_env = [
	I1127 11:36:11.990203  165526 command_runner.go:130] > # ]
	I1127 11:36:11.990210  165526 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1127 11:36:11.990215  165526 command_runner.go:130] > # selinux = false
	I1127 11:36:11.990225  165526 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1127 11:36:11.990233  165526 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1127 11:36:11.990245  165526 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1127 11:36:11.990251  165526 command_runner.go:130] > # seccomp_profile = ""
	I1127 11:36:11.990259  165526 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1127 11:36:11.990267  165526 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1127 11:36:11.990277  165526 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1127 11:36:11.990285  165526 command_runner.go:130] > # which might increase security.
	I1127 11:36:11.990293  165526 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1127 11:36:11.990303  165526 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1127 11:36:11.990313  165526 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1127 11:36:11.990323  165526 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1127 11:36:11.990332  165526 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1127 11:36:11.990338  165526 command_runner.go:130] > # This option supports live configuration reload.
	I1127 11:36:11.990343  165526 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1127 11:36:11.990351  165526 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1127 11:36:11.990355  165526 command_runner.go:130] > # the cgroup blockio controller.
	I1127 11:36:11.990359  165526 command_runner.go:130] > # blockio_config_file = ""
	I1127 11:36:11.990365  165526 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1127 11:36:11.990369  165526 command_runner.go:130] > # irqbalance daemon.
	I1127 11:36:11.990377  165526 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1127 11:36:11.990385  165526 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1127 11:36:11.990390  165526 command_runner.go:130] > # This option supports live configuration reload.
	I1127 11:36:11.990394  165526 command_runner.go:130] > # rdt_config_file = ""
	I1127 11:36:11.990399  165526 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1127 11:36:11.990403  165526 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1127 11:36:11.990411  165526 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1127 11:36:11.990415  165526 command_runner.go:130] > # separate_pull_cgroup = ""
	I1127 11:36:11.990421  165526 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1127 11:36:11.990432  165526 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1127 11:36:11.990436  165526 command_runner.go:130] > # will be added.
	I1127 11:36:11.990440  165526 command_runner.go:130] > # default_capabilities = [
	I1127 11:36:11.990444  165526 command_runner.go:130] > # 	"CHOWN",
	I1127 11:36:11.990447  165526 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1127 11:36:11.990451  165526 command_runner.go:130] > # 	"FSETID",
	I1127 11:36:11.990454  165526 command_runner.go:130] > # 	"FOWNER",
	I1127 11:36:11.990458  165526 command_runner.go:130] > # 	"SETGID",
	I1127 11:36:11.990461  165526 command_runner.go:130] > # 	"SETUID",
	I1127 11:36:11.990468  165526 command_runner.go:130] > # 	"SETPCAP",
	I1127 11:36:11.990472  165526 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1127 11:36:11.990475  165526 command_runner.go:130] > # 	"KILL",
	I1127 11:36:11.990478  165526 command_runner.go:130] > # ]
	I1127 11:36:11.990486  165526 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1127 11:36:11.990492  165526 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1127 11:36:11.990496  165526 command_runner.go:130] > # add_inheritable_capabilities = true
	I1127 11:36:11.990502  165526 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1127 11:36:11.990509  165526 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1127 11:36:11.990513  165526 command_runner.go:130] > # default_sysctls = [
	I1127 11:36:11.990517  165526 command_runner.go:130] > # ]
	I1127 11:36:11.990521  165526 command_runner.go:130] > # List of devices on the host that a
	I1127 11:36:11.990527  165526 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1127 11:36:11.990534  165526 command_runner.go:130] > # allowed_devices = [
	I1127 11:36:11.990537  165526 command_runner.go:130] > # 	"/dev/fuse",
	I1127 11:36:11.990541  165526 command_runner.go:130] > # ]
	I1127 11:36:11.990545  165526 command_runner.go:130] > # List of additional devices, specified as
	I1127 11:36:11.990613  165526 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1127 11:36:11.990621  165526 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1127 11:36:11.990627  165526 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1127 11:36:11.990633  165526 command_runner.go:130] > # additional_devices = [
	I1127 11:36:11.990636  165526 command_runner.go:130] > # ]
	I1127 11:36:11.990641  165526 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1127 11:36:11.990645  165526 command_runner.go:130] > # cdi_spec_dirs = [
	I1127 11:36:11.990648  165526 command_runner.go:130] > # 	"/etc/cdi",
	I1127 11:36:11.990652  165526 command_runner.go:130] > # 	"/var/run/cdi",
	I1127 11:36:11.990657  165526 command_runner.go:130] > # ]
	I1127 11:36:11.990662  165526 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1127 11:36:11.990668  165526 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1127 11:36:11.990672  165526 command_runner.go:130] > # Defaults to false.
	I1127 11:36:11.990677  165526 command_runner.go:130] > # device_ownership_from_security_context = false
	I1127 11:36:11.990683  165526 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1127 11:36:11.990688  165526 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1127 11:36:11.990692  165526 command_runner.go:130] > # hooks_dir = [
	I1127 11:36:11.990696  165526 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1127 11:36:11.990700  165526 command_runner.go:130] > # ]
	I1127 11:36:11.990707  165526 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1127 11:36:11.990713  165526 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1127 11:36:11.990718  165526 command_runner.go:130] > # its default mounts from the following two files:
	I1127 11:36:11.990721  165526 command_runner.go:130] > #
	I1127 11:36:11.990727  165526 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1127 11:36:11.990733  165526 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1127 11:36:11.990738  165526 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1127 11:36:11.990741  165526 command_runner.go:130] > #
	I1127 11:36:11.990747  165526 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1127 11:36:11.990753  165526 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1127 11:36:11.990759  165526 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1127 11:36:11.990764  165526 command_runner.go:130] > #      only add mounts it finds in this file.
	I1127 11:36:11.990767  165526 command_runner.go:130] > #
	I1127 11:36:11.990771  165526 command_runner.go:130] > # default_mounts_file = ""
	I1127 11:36:11.990776  165526 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1127 11:36:11.990782  165526 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1127 11:36:11.990786  165526 command_runner.go:130] > # pids_limit = 0
	I1127 11:36:11.990791  165526 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1127 11:36:11.990799  165526 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1127 11:36:11.990806  165526 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1127 11:36:11.990813  165526 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1127 11:36:11.990817  165526 command_runner.go:130] > # log_size_max = -1
	I1127 11:36:11.990823  165526 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1127 11:36:11.990829  165526 command_runner.go:130] > # log_to_journald = false
	I1127 11:36:11.990835  165526 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1127 11:36:11.990840  165526 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1127 11:36:11.990845  165526 command_runner.go:130] > # Path to directory for container attach sockets.
	I1127 11:36:11.990851  165526 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1127 11:36:11.990856  165526 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1127 11:36:11.990860  165526 command_runner.go:130] > # bind_mount_prefix = ""
	I1127 11:36:11.990865  165526 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1127 11:36:11.990869  165526 command_runner.go:130] > # read_only = false
	I1127 11:36:11.990875  165526 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1127 11:36:11.990881  165526 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1127 11:36:11.990885  165526 command_runner.go:130] > # live configuration reload.
	I1127 11:36:11.990889  165526 command_runner.go:130] > # log_level = "info"
	I1127 11:36:11.990896  165526 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1127 11:36:11.990901  165526 command_runner.go:130] > # This option supports live configuration reload.
	I1127 11:36:11.990904  165526 command_runner.go:130] > # log_filter = ""
	I1127 11:36:11.990910  165526 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1127 11:36:11.990916  165526 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1127 11:36:11.990919  165526 command_runner.go:130] > # separated by comma.
	I1127 11:36:11.990923  165526 command_runner.go:130] > # uid_mappings = ""
	I1127 11:36:11.990929  165526 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1127 11:36:11.990935  165526 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1127 11:36:11.990939  165526 command_runner.go:130] > # separated by comma.
	I1127 11:36:11.990942  165526 command_runner.go:130] > # gid_mappings = ""
	I1127 11:36:11.990948  165526 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1127 11:36:11.990954  165526 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1127 11:36:11.990959  165526 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1127 11:36:11.990964  165526 command_runner.go:130] > # minimum_mappable_uid = -1
	I1127 11:36:11.990969  165526 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1127 11:36:11.990975  165526 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1127 11:36:11.990981  165526 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1127 11:36:11.990988  165526 command_runner.go:130] > # minimum_mappable_gid = -1
	I1127 11:36:11.990994  165526 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1127 11:36:11.990999  165526 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1127 11:36:11.991005  165526 command_runner.go:130] > # value is 30s, as lower values are not considered by CRI-O.
	I1127 11:36:11.991008  165526 command_runner.go:130] > # ctr_stop_timeout = 30
	I1127 11:36:11.991014  165526 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1127 11:36:11.991035  165526 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1127 11:36:11.991043  165526 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1127 11:36:11.991048  165526 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1127 11:36:11.991052  165526 command_runner.go:130] > # drop_infra_ctr = true
	I1127 11:36:11.991058  165526 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1127 11:36:11.991064  165526 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1127 11:36:11.991071  165526 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1127 11:36:11.991075  165526 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1127 11:36:11.991081  165526 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1127 11:36:11.991085  165526 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1127 11:36:11.991089  165526 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1127 11:36:11.991096  165526 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1127 11:36:11.991102  165526 command_runner.go:130] > # pinns_path = ""
	I1127 11:36:11.991108  165526 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1127 11:36:11.991113  165526 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1127 11:36:11.991119  165526 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1127 11:36:11.991123  165526 command_runner.go:130] > # default_runtime = "runc"
	I1127 11:36:11.991128  165526 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1127 11:36:11.991135  165526 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1127 11:36:11.991144  165526 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1127 11:36:11.991149  165526 command_runner.go:130] > # creation as a file is not desired either.
	I1127 11:36:11.991160  165526 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1127 11:36:11.991165  165526 command_runner.go:130] > # the hostname is being managed dynamically.
	I1127 11:36:11.991169  165526 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1127 11:36:11.991172  165526 command_runner.go:130] > # ]
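	For reference, a minimal sketch of this option populated with the /etc/hostname example from the comment above (illustrative only, not part of this run's configuration):
	
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]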
	I1127 11:36:11.991178  165526 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1127 11:36:11.991184  165526 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1127 11:36:11.991190  165526 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1127 11:36:11.991196  165526 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1127 11:36:11.991199  165526 command_runner.go:130] > #
	I1127 11:36:11.991206  165526 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1127 11:36:11.991211  165526 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1127 11:36:11.991215  165526 command_runner.go:130] > #  runtime_type = "oci"
	I1127 11:36:11.991219  165526 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1127 11:36:11.991224  165526 command_runner.go:130] > #  privileged_without_host_devices = false
	I1127 11:36:11.991228  165526 command_runner.go:130] > #  allowed_annotations = []
	I1127 11:36:11.991231  165526 command_runner.go:130] > # Where:
	I1127 11:36:11.991236  165526 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1127 11:36:11.991244  165526 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1127 11:36:11.991250  165526 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1127 11:36:11.991258  165526 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1127 11:36:11.991264  165526 command_runner.go:130] > #   in $PATH.
	I1127 11:36:11.991269  165526 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1127 11:36:11.991274  165526 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1127 11:36:11.991285  165526 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1127 11:36:11.991289  165526 command_runner.go:130] > #   state.
	I1127 11:36:11.991295  165526 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1127 11:36:11.991300  165526 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1127 11:36:11.991308  165526 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1127 11:36:11.991313  165526 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1127 11:36:11.991319  165526 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1127 11:36:11.991325  165526 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1127 11:36:11.991330  165526 command_runner.go:130] > #   The currently recognized values are:
	I1127 11:36:11.991336  165526 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1127 11:36:11.991343  165526 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1127 11:36:11.991348  165526 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1127 11:36:11.991354  165526 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1127 11:36:11.991361  165526 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1127 11:36:11.991367  165526 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1127 11:36:11.991373  165526 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1127 11:36:11.991379  165526 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1127 11:36:11.991383  165526 command_runner.go:130] > #   should be moved to the container's cgroup
	I1127 11:36:11.991388  165526 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1127 11:36:11.991393  165526 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1127 11:36:11.991397  165526 command_runner.go:130] > runtime_type = "oci"
	I1127 11:36:11.991401  165526 command_runner.go:130] > runtime_root = "/run/runc"
	I1127 11:36:11.991407  165526 command_runner.go:130] > runtime_config_path = ""
	I1127 11:36:11.991411  165526 command_runner.go:130] > monitor_path = ""
	I1127 11:36:11.991415  165526 command_runner.go:130] > monitor_cgroup = ""
	I1127 11:36:11.991419  165526 command_runner.go:130] > monitor_exec_cgroup = ""
	I1127 11:36:11.991482  165526 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1127 11:36:11.991486  165526 command_runner.go:130] > # running containers
	I1127 11:36:11.991490  165526 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1127 11:36:11.991496  165526 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1127 11:36:11.991505  165526 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1127 11:36:11.991510  165526 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1127 11:36:11.991515  165526 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1127 11:36:11.991521  165526 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1127 11:36:11.991525  165526 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1127 11:36:11.991529  165526 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1127 11:36:11.991534  165526 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1127 11:36:11.991538  165526 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
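	For reference, a minimal sketch of the commented-out crun handler above filled in per the runtime-table format documented earlier; the binary path and root directory shown are assumptions, not values observed on this host:
	
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"  # assumed install location
	runtime_type = "oci"
	runtime_root = "/run/crun"      # assumed state directory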
	I1127 11:36:11.991544  165526 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1127 11:36:11.991549  165526 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1127 11:36:11.991557  165526 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1127 11:36:11.991564  165526 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1127 11:36:11.991571  165526 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1127 11:36:11.991576  165526 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1127 11:36:11.991585  165526 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1127 11:36:11.991593  165526 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1127 11:36:11.991598  165526 command_runner.go:130] > # signifying that the default value of that resource type should be overridden.
	I1127 11:36:11.991605  165526 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1127 11:36:11.991608  165526 command_runner.go:130] > # Example:
	I1127 11:36:11.991613  165526 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1127 11:36:11.991617  165526 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1127 11:36:11.991622  165526 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1127 11:36:11.991627  165526 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1127 11:36:11.991630  165526 command_runner.go:130] > # cpuset = "0-1"
	I1127 11:36:11.991634  165526 command_runner.go:130] > # cpushares = 0
	I1127 11:36:11.991639  165526 command_runner.go:130] > # Where:
	I1127 11:36:11.991644  165526 command_runner.go:130] > # The workload name is workload-type.
	I1127 11:36:11.991650  165526 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1127 11:36:11.991658  165526 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1127 11:36:11.991682  165526 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1127 11:36:11.991702  165526 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1127 11:36:11.991710  165526 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
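	Consolidating the commented example above into one sketch (the cpushares value is illustrative; cpuset takes a Linux CPU list string and cpushares a numeric share value):
	
	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	cpuset = "0-1"
	cpushares = 512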
	I1127 11:36:11.991715  165526 command_runner.go:130] > # 
	I1127 11:36:11.991725  165526 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1127 11:36:11.991733  165526 command_runner.go:130] > #
	I1127 11:36:11.991746  165526 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1127 11:36:11.991764  165526 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1127 11:36:11.991775  165526 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1127 11:36:11.991787  165526 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1127 11:36:11.991800  165526 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1127 11:36:11.991811  165526 command_runner.go:130] > [crio.image]
	I1127 11:36:11.991822  165526 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1127 11:36:11.991834  165526 command_runner.go:130] > # default_transport = "docker://"
	I1127 11:36:11.991845  165526 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1127 11:36:11.991859  165526 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1127 11:36:11.991870  165526 command_runner.go:130] > # global_auth_file = ""
	I1127 11:36:11.991883  165526 command_runner.go:130] > # The image used to instantiate infra containers.
	I1127 11:36:11.991896  165526 command_runner.go:130] > # This option supports live configuration reload.
	I1127 11:36:11.991907  165526 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1127 11:36:11.991919  165526 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1127 11:36:11.991932  165526 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1127 11:36:11.991941  165526 command_runner.go:130] > # This option supports live configuration reload.
	I1127 11:36:11.991952  165526 command_runner.go:130] > # pause_image_auth_file = ""
	I1127 11:36:11.991963  165526 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1127 11:36:11.991977  165526 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1127 11:36:11.991991  165526 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1127 11:36:11.992005  165526 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1127 11:36:11.992014  165526 command_runner.go:130] > # pause_command = "/pause"
	I1127 11:36:11.992028  165526 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1127 11:36:11.992042  165526 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1127 11:36:11.992056  165526 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1127 11:36:11.992070  165526 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1127 11:36:11.992082  165526 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1127 11:36:11.992093  165526 command_runner.go:130] > # signature_policy = ""
	I1127 11:36:11.992143  165526 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1127 11:36:11.992156  165526 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1127 11:36:11.992163  165526 command_runner.go:130] > # changing them here.
	I1127 11:36:11.992171  165526 command_runner.go:130] > # insecure_registries = [
	I1127 11:36:11.992180  165526 command_runner.go:130] > # ]
	I1127 11:36:11.992192  165526 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1127 11:36:11.992205  165526 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1127 11:36:11.992219  165526 command_runner.go:130] > # image_volumes = "mkdir"
	I1127 11:36:11.992232  165526 command_runner.go:130] > # Temporary directory to use for storing big files
	I1127 11:36:11.992243  165526 command_runner.go:130] > # big_files_temporary_dir = ""
	I1127 11:36:11.992257  165526 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1127 11:36:11.992267  165526 command_runner.go:130] > # CNI plugins.
	I1127 11:36:11.992273  165526 command_runner.go:130] > [crio.network]
	I1127 11:36:11.992284  165526 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1127 11:36:11.992297  165526 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1127 11:36:11.992308  165526 command_runner.go:130] > # cni_default_network = ""
	I1127 11:36:11.992318  165526 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1127 11:36:11.992331  165526 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1127 11:36:11.992343  165526 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1127 11:36:11.992350  165526 command_runner.go:130] > # plugin_dirs = [
	I1127 11:36:11.992357  165526 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1127 11:36:11.992362  165526 command_runner.go:130] > # ]
	I1127 11:36:11.992372  165526 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1127 11:36:11.992382  165526 command_runner.go:130] > [crio.metrics]
	I1127 11:36:11.992388  165526 command_runner.go:130] > # Globally enable or disable metrics support.
	I1127 11:36:11.992399  165526 command_runner.go:130] > # enable_metrics = false
	I1127 11:36:11.992408  165526 command_runner.go:130] > # Specify enabled metrics collectors.
	I1127 11:36:11.992416  165526 command_runner.go:130] > # By default, all metrics are enabled.
	I1127 11:36:11.992435  165526 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1127 11:36:11.992448  165526 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1127 11:36:11.992464  165526 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1127 11:36:11.992474  165526 command_runner.go:130] > # metrics_collectors = [
	I1127 11:36:11.992478  165526 command_runner.go:130] > # 	"operations",
	I1127 11:36:11.992486  165526 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1127 11:36:11.992493  165526 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1127 11:36:11.992498  165526 command_runner.go:130] > # 	"operations_errors",
	I1127 11:36:11.992507  165526 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1127 11:36:11.992512  165526 command_runner.go:130] > # 	"image_pulls_by_name",
	I1127 11:36:11.992517  165526 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1127 11:36:11.992523  165526 command_runner.go:130] > # 	"image_pulls_failures",
	I1127 11:36:11.992528  165526 command_runner.go:130] > # 	"image_pulls_successes",
	I1127 11:36:11.992534  165526 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1127 11:36:11.992538  165526 command_runner.go:130] > # 	"image_layer_reuse",
	I1127 11:36:11.992544  165526 command_runner.go:130] > # 	"containers_oom_total",
	I1127 11:36:11.992548  165526 command_runner.go:130] > # 	"containers_oom",
	I1127 11:36:11.992555  165526 command_runner.go:130] > # 	"processes_defunct",
	I1127 11:36:11.992558  165526 command_runner.go:130] > # 	"operations_total",
	I1127 11:36:11.992563  165526 command_runner.go:130] > # 	"operations_latency_seconds",
	I1127 11:36:11.992568  165526 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1127 11:36:11.992572  165526 command_runner.go:130] > # 	"operations_errors_total",
	I1127 11:36:11.992576  165526 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1127 11:36:11.992583  165526 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1127 11:36:11.992590  165526 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1127 11:36:11.992597  165526 command_runner.go:130] > # 	"image_pulls_success_total",
	I1127 11:36:11.992603  165526 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1127 11:36:11.992610  165526 command_runner.go:130] > # 	"containers_oom_count_total",
	I1127 11:36:11.992613  165526 command_runner.go:130] > # ]
	I1127 11:36:11.992620  165526 command_runner.go:130] > # The port on which the metrics server will listen.
	I1127 11:36:11.992624  165526 command_runner.go:130] > # metrics_port = 9090
	I1127 11:36:11.992629  165526 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1127 11:36:11.992636  165526 command_runner.go:130] > # metrics_socket = ""
	I1127 11:36:11.992641  165526 command_runner.go:130] > # The certificate for the secure metrics server.
	I1127 11:36:11.992649  165526 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1127 11:36:11.992655  165526 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1127 11:36:11.992662  165526 command_runner.go:130] > # certificate on any modification event.
	I1127 11:36:11.992666  165526 command_runner.go:130] > # metrics_cert = ""
	I1127 11:36:11.992671  165526 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1127 11:36:11.992678  165526 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1127 11:36:11.992682  165526 command_runner.go:130] > # metrics_key = ""
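	For reference, a minimal sketch of enabling the metrics endpoint with a subset of the collectors listed above (enabling metrics and the collector choice are illustrative; 9090 is the documented default port):
	
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
	]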
	I1127 11:36:11.992690  165526 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1127 11:36:11.992695  165526 command_runner.go:130] > [crio.tracing]
	I1127 11:36:11.992702  165526 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1127 11:36:11.992709  165526 command_runner.go:130] > # enable_tracing = false
	I1127 11:36:11.992717  165526 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1127 11:36:11.992722  165526 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1127 11:36:11.992729  165526 command_runner.go:130] > # Number of samples to collect per million spans.
	I1127 11:36:11.992734  165526 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1127 11:36:11.992742  165526 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1127 11:36:11.992748  165526 command_runner.go:130] > [crio.stats]
	I1127 11:36:11.992754  165526 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1127 11:36:11.992761  165526 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1127 11:36:11.992765  165526 command_runner.go:130] > # stats_collection_period = 0
	I1127 11:36:11.992828  165526 command_runner.go:130] ! time="2023-11-27 11:36:11.986625275Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1127 11:36:11.992847  165526 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1127 11:36:11.992933  165526 cni.go:84] Creating CNI manager for ""
	I1127 11:36:11.992945  165526 cni.go:136] 1 nodes found, recommending kindnet
	I1127 11:36:11.992963  165526 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 11:36:11.992987  165526 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-780990 NodeName:multinode-780990 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1127 11:36:11.993118  165526 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-780990"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1127 11:36:11.993180  165526 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-780990 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-780990 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1127 11:36:11.993233  165526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1127 11:36:12.001998  165526 command_runner.go:130] > kubeadm
	I1127 11:36:12.002029  165526 command_runner.go:130] > kubectl
	I1127 11:36:12.002033  165526 command_runner.go:130] > kubelet
	I1127 11:36:12.002067  165526 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 11:36:12.002132  165526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1127 11:36:12.010820  165526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1127 11:36:12.026918  165526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1127 11:36:12.042850  165526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1127 11:36:12.059077  165526 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1127 11:36:12.062417  165526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 11:36:12.072486  165526 certs.go:56] Setting up /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990 for IP: 192.168.58.2
	I1127 11:36:12.072534  165526 certs.go:190] acquiring lock for shared ca certs: {Name:mk5858a15575801c48b8e08b34d7442dd346ca1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:36:12.072697  165526 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.key
	I1127 11:36:12.072751  165526 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.key
	I1127 11:36:12.072812  165526 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/client.key
	I1127 11:36:12.072831  165526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/client.crt with IP's: []
	I1127 11:36:12.113193  165526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/client.crt ...
	I1127 11:36:12.113225  165526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/client.crt: {Name:mk85b01f7b6b4d6a9ceff71d5e03456db296396c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:36:12.113397  165526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/client.key ...
	I1127 11:36:12.113406  165526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/client.key: {Name:mk01fa67b286fe264b85f47409653c55a1dc0914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:36:12.113482  165526 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/apiserver.key.cee25041
	I1127 11:36:12.113500  165526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1127 11:36:12.444498  165526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/apiserver.crt.cee25041 ...
	I1127 11:36:12.444545  165526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/apiserver.crt.cee25041: {Name:mk5bddbe000f28226e90ab31e1e14c54a31bc55d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:36:12.444769  165526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/apiserver.key.cee25041 ...
	I1127 11:36:12.444788  165526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/apiserver.key.cee25041: {Name:mk843213475a06e3be6ea567dbdfab601aaaf4d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:36:12.444900  165526 certs.go:337] copying /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/apiserver.crt
	I1127 11:36:12.445040  165526 certs.go:341] copying /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/apiserver.key
	I1127 11:36:12.445122  165526 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/proxy-client.key
	I1127 11:36:12.445143  165526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/proxy-client.crt with IP's: []
	I1127 11:36:12.745293  165526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/proxy-client.crt ...
	I1127 11:36:12.745324  165526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/proxy-client.crt: {Name:mk56a87cf45c1b3f785eec596c711b11eeddd6b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:36:12.745519  165526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/proxy-client.key ...
	I1127 11:36:12.745536  165526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/proxy-client.key: {Name:mkad2f53210745af924c243cdcb8cccb6f89b4c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:36:12.745629  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1127 11:36:12.745649  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1127 11:36:12.745659  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1127 11:36:12.745671  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1127 11:36:12.745686  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1127 11:36:12.745699  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1127 11:36:12.745711  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1127 11:36:12.745724  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1127 11:36:12.745786  165526 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/79153.pem (1338 bytes)
	W1127 11:36:12.745821  165526 certs.go:433] ignoring /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/79153_empty.pem, impossibly tiny 0 bytes
	I1127 11:36:12.745837  165526 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca-key.pem (1679 bytes)
	I1127 11:36:12.745867  165526 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem (1082 bytes)
	I1127 11:36:12.745892  165526 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem (1123 bytes)
	I1127 11:36:12.745914  165526 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem (1675 bytes)
	I1127 11:36:12.745956  165526 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem (1708 bytes)
	I1127 11:36:12.745982  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:36:12.745996  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/79153.pem -> /usr/share/ca-certificates/79153.pem
	I1127 11:36:12.746011  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem -> /usr/share/ca-certificates/791532.pem
	I1127 11:36:12.746551  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1127 11:36:12.768150  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1127 11:36:12.788706  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1127 11:36:12.809176  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1127 11:36:12.829859  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 11:36:12.850536  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1127 11:36:12.871324  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 11:36:12.892779  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1127 11:36:12.914109  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 11:36:12.935187  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/certs/79153.pem --> /usr/share/ca-certificates/79153.pem (1338 bytes)
	I1127 11:36:12.956568  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem --> /usr/share/ca-certificates/791532.pem (1708 bytes)
	I1127 11:36:12.977283  165526 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1127 11:36:12.992692  165526 ssh_runner.go:195] Run: openssl version
	I1127 11:36:12.997512  165526 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1127 11:36:12.997745  165526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 11:36:13.006023  165526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:36:13.009172  165526 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 27 11:17 /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:36:13.009199  165526 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 11:17 /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:36:13.009235  165526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:36:13.015207  165526 command_runner.go:130] > b5213941
	I1127 11:36:13.015372  165526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1127 11:36:13.023700  165526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/79153.pem && ln -fs /usr/share/ca-certificates/79153.pem /etc/ssl/certs/79153.pem"
	I1127 11:36:13.032036  165526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/79153.pem
	I1127 11:36:13.034981  165526 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 27 11:23 /usr/share/ca-certificates/79153.pem
	I1127 11:36:13.035011  165526 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 11:23 /usr/share/ca-certificates/79153.pem
	I1127 11:36:13.035043  165526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/79153.pem
	I1127 11:36:13.041016  165526 command_runner.go:130] > 51391683
	I1127 11:36:13.041141  165526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/79153.pem /etc/ssl/certs/51391683.0"
	I1127 11:36:13.049599  165526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/791532.pem && ln -fs /usr/share/ca-certificates/791532.pem /etc/ssl/certs/791532.pem"
	I1127 11:36:13.058137  165526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791532.pem
	I1127 11:36:13.061479  165526 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 27 11:23 /usr/share/ca-certificates/791532.pem
	I1127 11:36:13.061508  165526 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 11:23 /usr/share/ca-certificates/791532.pem
	I1127 11:36:13.061549  165526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791532.pem
	I1127 11:36:13.067521  165526 command_runner.go:130] > 3ec20f2e
	I1127 11:36:13.067821  165526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/791532.pem /etc/ssl/certs/3ec20f2e.0"
	I1127 11:36:13.076412  165526 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 11:36:13.079360  165526 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 11:36:13.079411  165526 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 11:36:13.079451  165526 kubeadm.go:404] StartCluster: {Name:multinode-780990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-780990 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:36:13.079531  165526 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1127 11:36:13.079567  165526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1127 11:36:13.112443  165526 cri.go:89] found id: ""
	I1127 11:36:13.112510  165526 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1127 11:36:13.119816  165526 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1127 11:36:13.119853  165526 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1127 11:36:13.119859  165526 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1127 11:36:13.120544  165526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 11:36:13.128334  165526 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1127 11:36:13.128396  165526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 11:36:13.136155  165526 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1127 11:36:13.136186  165526 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1127 11:36:13.136201  165526 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1127 11:36:13.136215  165526 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 11:36:13.136255  165526 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 11:36:13.136305  165526 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1127 11:36:13.179704  165526 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1127 11:36:13.179718  165526 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1127 11:36:13.180129  165526 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 11:36:13.180150  165526 command_runner.go:130] > [preflight] Running pre-flight checks
	I1127 11:36:13.215005  165526 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1127 11:36:13.215040  165526 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1127 11:36:13.215166  165526 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1046-gcp
	I1127 11:36:13.215176  165526 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1046-gcp
	I1127 11:36:13.215232  165526 kubeadm.go:322] OS: Linux
	I1127 11:36:13.215244  165526 command_runner.go:130] > OS: Linux
	I1127 11:36:13.215310  165526 kubeadm.go:322] CGROUPS_CPU: enabled
	I1127 11:36:13.215320  165526 command_runner.go:130] > CGROUPS_CPU: enabled
	I1127 11:36:13.215399  165526 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1127 11:36:13.215417  165526 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1127 11:36:13.215479  165526 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1127 11:36:13.215490  165526 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1127 11:36:13.215555  165526 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1127 11:36:13.215566  165526 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1127 11:36:13.215629  165526 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1127 11:36:13.215646  165526 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1127 11:36:13.215754  165526 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1127 11:36:13.215766  165526 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1127 11:36:13.215898  165526 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1127 11:36:13.215922  165526 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1127 11:36:13.215994  165526 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1127 11:36:13.216005  165526 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1127 11:36:13.216076  165526 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1127 11:36:13.216086  165526 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1127 11:36:13.276177  165526 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 11:36:13.276223  165526 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 11:36:13.276344  165526 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 11:36:13.276356  165526 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 11:36:13.276460  165526 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 11:36:13.276470  165526 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 11:36:13.467757  165526 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 11:36:13.470128  165526 out.go:204]   - Generating certificates and keys ...
	I1127 11:36:13.467869  165526 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 11:36:13.470279  165526 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 11:36:13.470296  165526 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1127 11:36:13.470372  165526 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 11:36:13.470382  165526 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1127 11:36:13.605938  165526 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 11:36:13.605971  165526 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 11:36:13.681467  165526 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1127 11:36:13.681506  165526 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1127 11:36:13.795712  165526 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1127 11:36:13.795748  165526 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1127 11:36:14.045244  165526 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1127 11:36:14.045278  165526 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1127 11:36:14.156720  165526 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1127 11:36:14.156750  165526 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1127 11:36:14.156884  165526 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-780990] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1127 11:36:14.156896  165526 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-780990] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1127 11:36:14.615371  165526 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1127 11:36:14.615403  165526 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1127 11:36:14.615523  165526 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-780990] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1127 11:36:14.615535  165526 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-780990] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1127 11:36:14.761577  165526 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 11:36:14.761604  165526 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 11:36:15.034631  165526 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 11:36:15.034597  165526 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 11:36:15.088832  165526 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1127 11:36:15.088856  165526 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1127 11:36:15.089027  165526 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 11:36:15.089052  165526 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 11:36:15.388490  165526 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 11:36:15.388530  165526 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 11:36:15.742038  165526 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 11:36:15.742075  165526 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 11:36:15.878138  165526 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 11:36:15.878163  165526 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 11:36:16.028895  165526 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 11:36:16.028925  165526 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 11:36:16.029435  165526 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 11:36:16.029432  165526 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 11:36:16.031482  165526 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 11:36:16.033878  165526 out.go:204]   - Booting up control plane ...
	I1127 11:36:16.031548  165526 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 11:36:16.033965  165526 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 11:36:16.033983  165526 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 11:36:16.034088  165526 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 11:36:16.034096  165526 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 11:36:16.034825  165526 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 11:36:16.034849  165526 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 11:36:16.043102  165526 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 11:36:16.043120  165526 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 11:36:16.043991  165526 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 11:36:16.044018  165526 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 11:36:16.044070  165526 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1127 11:36:16.044082  165526 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1127 11:36:16.119152  165526 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 11:36:16.119175  165526 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 11:36:21.620968  165526 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.501852 seconds
	I1127 11:36:21.621002  165526 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.501852 seconds
	I1127 11:36:21.621154  165526 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 11:36:21.621169  165526 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 11:36:21.632845  165526 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 11:36:21.632864  165526 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 11:36:22.153696  165526 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 11:36:22.153737  165526 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1127 11:36:22.153923  165526 kubeadm.go:322] [mark-control-plane] Marking the node multinode-780990 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1127 11:36:22.153935  165526 command_runner.go:130] > [mark-control-plane] Marking the node multinode-780990 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1127 11:36:22.664025  165526 kubeadm.go:322] [bootstrap-token] Using token: 1mv2u0.hz23nz2kw27jsxer
	I1127 11:36:22.665806  165526 out.go:204]   - Configuring RBAC rules ...
	I1127 11:36:22.664154  165526 command_runner.go:130] > [bootstrap-token] Using token: 1mv2u0.hz23nz2kw27jsxer
	I1127 11:36:22.665981  165526 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 11:36:22.666011  165526 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 11:36:22.670850  165526 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 11:36:22.670880  165526 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 11:36:22.678008  165526 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 11:36:22.678033  165526 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 11:36:22.680649  165526 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 11:36:22.680672  165526 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 11:36:22.683380  165526 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 11:36:22.683403  165526 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 11:36:22.686410  165526 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 11:36:22.686435  165526 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 11:36:22.696304  165526 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 11:36:22.696346  165526 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 11:36:22.931810  165526 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 11:36:22.931842  165526 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1127 11:36:23.075994  165526 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 11:36:23.076025  165526 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1127 11:36:23.141346  165526 kubeadm.go:322] 
	I1127 11:36:23.141431  165526 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 11:36:23.141471  165526 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1127 11:36:23.141522  165526 kubeadm.go:322] 
	I1127 11:36:23.141627  165526 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 11:36:23.141660  165526 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1127 11:36:23.141700  165526 kubeadm.go:322] 
	I1127 11:36:23.141745  165526 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 11:36:23.141756  165526 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1127 11:36:23.141853  165526 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 11:36:23.141862  165526 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 11:36:23.141924  165526 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 11:36:23.141933  165526 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 11:36:23.141938  165526 kubeadm.go:322] 
	I1127 11:36:23.142013  165526 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1127 11:36:23.142025  165526 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1127 11:36:23.142037  165526 kubeadm.go:322] 
	I1127 11:36:23.142142  165526 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1127 11:36:23.142171  165526 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1127 11:36:23.142178  165526 kubeadm.go:322] 
	I1127 11:36:23.142260  165526 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 11:36:23.142270  165526 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1127 11:36:23.142373  165526 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 11:36:23.142383  165526 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 11:36:23.142509  165526 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 11:36:23.142536  165526 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 11:36:23.142552  165526 kubeadm.go:322] 
	I1127 11:36:23.142678  165526 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1127 11:36:23.142699  165526 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1127 11:36:23.142788  165526 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 11:36:23.142797  165526 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1127 11:36:23.142803  165526 kubeadm.go:322] 
	I1127 11:36:23.142892  165526 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1mv2u0.hz23nz2kw27jsxer \
	I1127 11:36:23.142901  165526 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 1mv2u0.hz23nz2kw27jsxer \
	I1127 11:36:23.143021  165526 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8a429d79c655c2807afe3f51b29d4e9332b2ae21312f3b8d4be03bf35a7ebe07 \
	I1127 11:36:23.143030  165526 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:8a429d79c655c2807afe3f51b29d4e9332b2ae21312f3b8d4be03bf35a7ebe07 \
	I1127 11:36:23.143056  165526 kubeadm.go:322] 	--control-plane 
	I1127 11:36:23.143065  165526 command_runner.go:130] > 	--control-plane 
	I1127 11:36:23.143071  165526 kubeadm.go:322] 
	I1127 11:36:23.143180  165526 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 11:36:23.143193  165526 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1127 11:36:23.143197  165526 kubeadm.go:322] 
	I1127 11:36:23.143285  165526 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1mv2u0.hz23nz2kw27jsxer \
	I1127 11:36:23.143291  165526 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 1mv2u0.hz23nz2kw27jsxer \
	I1127 11:36:23.143421  165526 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8a429d79c655c2807afe3f51b29d4e9332b2ae21312f3b8d4be03bf35a7ebe07 
	I1127 11:36:23.143427  165526 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:8a429d79c655c2807afe3f51b29d4e9332b2ae21312f3b8d4be03bf35a7ebe07 
	I1127 11:36:23.145707  165526 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1127 11:36:23.145713  165526 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1127 11:36:23.145928  165526 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 11:36:23.145952  165526 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
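
The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the DER-encoded Subject Public Key Info (SPKI) of the cluster CA certificate. A minimal sketch of recomputing it in Go, assuming the certificateDir logged earlier ("/var/lib/minikube/certs") and the conventional ca.crt file name:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Read the cluster CA certificate from the certificateDir used above
	// (path is an assumption taken from the log, not a universal default).
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum[:])
}
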
	I1127 11:36:23.145976  165526 cni.go:84] Creating CNI manager for ""
	I1127 11:36:23.145992  165526 cni.go:136] 1 nodes found, recommending kindnet
	I1127 11:36:23.147880  165526 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1127 11:36:23.149436  165526 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1127 11:36:23.154161  165526 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1127 11:36:23.154191  165526 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I1127 11:36:23.154202  165526 command_runner.go:130] > Device: 33h/51d	Inode: 584907      Links: 1
	I1127 11:36:23.154213  165526 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 11:36:23.154222  165526 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1127 11:36:23.154231  165526 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1127 11:36:23.154238  165526 command_runner.go:130] > Change: 2023-11-27 11:17:13.015845700 +0000
	I1127 11:36:23.154252  165526 command_runner.go:130] >  Birth: 2023-11-27 11:17:12.991843248 +0000
	I1127 11:36:23.154312  165526 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1127 11:36:23.154327  165526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1127 11:36:23.173042  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1127 11:36:23.810885  165526 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1127 11:36:23.817878  165526 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1127 11:36:23.824173  165526 command_runner.go:130] > serviceaccount/kindnet created
	I1127 11:36:23.833060  165526 command_runner.go:130] > daemonset.apps/kindnet created
	I1127 11:36:23.837320  165526 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 11:36:23.837396  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:23.837415  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=81390b5609e7feb2151fde4633273d04eb05a21f minikube.k8s.io/name=multinode-780990 minikube.k8s.io/updated_at=2023_11_27T11_36_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:23.844156  165526 command_runner.go:130] > -16
	I1127 11:36:23.844192  165526 ops.go:34] apiserver oom_adj: -16
	I1127 11:36:23.942516  165526 command_runner.go:130] > node/multinode-780990 labeled
	I1127 11:36:23.942614  165526 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1127 11:36:23.942730  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:24.003714  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:24.006641  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:24.070830  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:24.574308  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:24.637541  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:25.074079  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:25.135520  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:25.574705  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:25.639588  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:26.074159  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:26.135421  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:26.574603  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:26.636071  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:27.074045  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:27.135946  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:27.574478  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:27.637691  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:28.074092  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:28.139967  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:28.574556  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:28.636645  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:29.073984  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:29.137173  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:29.574348  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:29.636307  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:30.073895  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:30.135199  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:30.574306  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:30.636621  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:31.074262  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:31.139862  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:31.574477  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:31.638988  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:32.074694  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:32.139872  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:32.574172  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:32.637903  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:33.074551  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:33.137613  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:33.573781  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:33.635630  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:34.074265  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:34.135364  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:34.574452  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:34.636901  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:35.073986  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:35.139293  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:35.573881  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:35.843832  165526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 11:36:36.074239  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:36:36.135902  165526 command_runner.go:130] > NAME      SECRETS   AGE
	I1127 11:36:36.135929  165526 command_runner.go:130] > default   0         1s
	I1127 11:36:36.138242  165526 kubeadm.go:1081] duration metric: took 12.300911703s to wait for elevateKubeSystemPrivileges.
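
The burst of "kubectl get sa default" retries above is minikube waiting for the controller manager to create the "default" ServiceAccount before granting kube-system privileges (the elevateKubeSystemPrivileges step whose duration is reported here). A rough client-go equivalent, as a sketch assuming the in-VM kubeconfig path from the log:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll until the ServiceAccount exists, like the loop of kubectl calls above.
	for {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default ServiceAccount exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
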
	I1127 11:36:36.138282  165526 kubeadm.go:406] StartCluster complete in 23.058835284s
	I1127 11:36:36.138304  165526 settings.go:142] acquiring lock: {Name:mkff9c1e77c1a71ba60e8e9acbffbd8799fc8519 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:36:36.138373  165526 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:36:36.138970  165526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-72381/kubeconfig: {Name:mke9c53ad28720f96b51e42e525b68d1097488ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:36:36.139182  165526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 11:36:36.139339  165526 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1127 11:36:36.139405  165526 config.go:182] Loaded profile config "multinode-780990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 11:36:36.139441  165526 addons.go:69] Setting storage-provisioner=true in profile "multinode-780990"
	I1127 11:36:36.139450  165526 addons.go:69] Setting default-storageclass=true in profile "multinode-780990"
	I1127 11:36:36.139465  165526 addons.go:231] Setting addon storage-provisioner=true in "multinode-780990"
	I1127 11:36:36.139472  165526 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-780990"
	I1127 11:36:36.139532  165526 host.go:66] Checking if "multinode-780990" exists ...
	I1127 11:36:36.139572  165526 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:36:36.139906  165526 cli_runner.go:164] Run: docker container inspect multinode-780990 --format={{.State.Status}}
	I1127 11:36:36.139897  165526 kapi.go:59] client config for multinode-780990: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/client.crt", KeyFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/client.key", CAFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24d80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 11:36:36.140026  165526 cli_runner.go:164] Run: docker container inspect multinode-780990 --format={{.State.Status}}
	I1127 11:36:36.140650  165526 cert_rotation.go:137] Starting client certificate rotation controller
	I1127 11:36:36.140819  165526 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1127 11:36:36.140835  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:36.140843  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:36.140849  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:36.151322  165526 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1127 11:36:36.151350  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:36.151362  165526 round_trippers.go:580]     Content-Length: 291
	I1127 11:36:36.151371  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:36 GMT
	I1127 11:36:36.151379  165526 round_trippers.go:580]     Audit-Id: fcef2100-65a0-4b1a-95de-c4ff84a5a4a8
	I1127 11:36:36.151388  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:36.151396  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:36.151408  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:36.151415  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:36.151446  165526 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"807b8525-b261-47f5-a79c-105cde32cffa","resourceVersion":"344","creationTimestamp":"2023-11-27T11:36:22Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1127 11:36:36.152006  165526 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"807b8525-b261-47f5-a79c-105cde32cffa","resourceVersion":"344","creationTimestamp":"2023-11-27T11:36:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1127 11:36:36.152080  165526 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1127 11:36:36.152096  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:36.152107  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:36.152120  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:36.152133  165526 round_trippers.go:473]     Content-Type: application/json
	I1127 11:36:36.161459  165526 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 11:36:36.160015  165526 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1127 11:36:36.162947  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:36.162963  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:36.162972  165526 round_trippers.go:580]     Content-Length: 291
	I1127 11:36:36.162986  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:36 GMT
	I1127 11:36:36.162996  165526 round_trippers.go:580]     Audit-Id: a65add07-96c6-49d8-b6dd-4c74e45d30c6
	I1127 11:36:36.163004  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:36.163012  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:36.163020  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:36.163049  165526 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 11:36:36.163071  165526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 11:36:36.163132  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990
	I1127 11:36:36.163055  165526 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"807b8525-b261-47f5-a79c-105cde32cffa","resourceVersion":"348","creationTimestamp":"2023-11-27T11:36:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1127 11:36:36.163485  165526 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1127 11:36:36.163500  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:36.163511  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:36.163526  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:36.168085  165526 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1127 11:36:36.168109  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:36.168120  165526 round_trippers.go:580]     Audit-Id: 5212c919-59df-43d2-aa5d-c1c2f907c81c
	I1127 11:36:36.168130  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:36.168145  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:36.168159  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:36.168169  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:36.168181  165526 round_trippers.go:580]     Content-Length: 291
	I1127 11:36:36.168192  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:36 GMT
	I1127 11:36:36.168222  165526 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"807b8525-b261-47f5-a79c-105cde32cffa","resourceVersion":"348","creationTimestamp":"2023-11-27T11:36:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1127 11:36:36.168330  165526 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-780990" context rescaled to 1 replicas
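
The GET/PUT pair above edits the Deployment's autoscaling/v1 Scale subresource rather than the Deployment object itself, which is why only spec.replicas changes between the request and response bodies. The same rescale with client-go, as a sketch (cs is a *kubernetes.Clientset built as in the earlier snippet; imports as there, plus context):

// rescaleCoreDNS shrinks kube-system/coredns to one replica via the
// Scale subresource, mirroring the GET and PUT shown in the log.
func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 1
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
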
	I1127 11:36:36.168365  165526 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 11:36:36.170599  165526 out.go:177] * Verifying Kubernetes components...
	I1127 11:36:36.168537  165526 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:36:36.172363  165526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:36:36.172613  165526 kapi.go:59] client config for multinode-780990: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/client.crt", KeyFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/client.key", CAFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24d80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 11:36:36.173002  165526 addons.go:231] Setting addon default-storageclass=true in "multinode-780990"
	I1127 11:36:36.173043  165526 host.go:66] Checking if "multinode-780990" exists ...
	I1127 11:36:36.173553  165526 cli_runner.go:164] Run: docker container inspect multinode-780990 --format={{.State.Status}}
	I1127 11:36:36.183837  165526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990/id_rsa Username:docker}
	I1127 11:36:36.194567  165526 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 11:36:36.194597  165526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 11:36:36.194660  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990
	I1127 11:36:36.213599  165526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990/id_rsa Username:docker}
	I1127 11:36:36.254562  165526 command_runner.go:130] > apiVersion: v1
	I1127 11:36:36.254583  165526 command_runner.go:130] > data:
	I1127 11:36:36.254588  165526 command_runner.go:130] >   Corefile: |
	I1127 11:36:36.254592  165526 command_runner.go:130] >     .:53 {
	I1127 11:36:36.254595  165526 command_runner.go:130] >         errors
	I1127 11:36:36.254600  165526 command_runner.go:130] >         health {
	I1127 11:36:36.254606  165526 command_runner.go:130] >            lameduck 5s
	I1127 11:36:36.254613  165526 command_runner.go:130] >         }
	I1127 11:36:36.254619  165526 command_runner.go:130] >         ready
	I1127 11:36:36.254629  165526 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1127 11:36:36.254641  165526 command_runner.go:130] >            pods insecure
	I1127 11:36:36.254649  165526 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1127 11:36:36.254656  165526 command_runner.go:130] >            ttl 30
	I1127 11:36:36.254661  165526 command_runner.go:130] >         }
	I1127 11:36:36.254668  165526 command_runner.go:130] >         prometheus :9153
	I1127 11:36:36.254673  165526 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1127 11:36:36.254681  165526 command_runner.go:130] >            max_concurrent 1000
	I1127 11:36:36.254685  165526 command_runner.go:130] >         }
	I1127 11:36:36.254692  165526 command_runner.go:130] >         cache 30
	I1127 11:36:36.254698  165526 command_runner.go:130] >         loop
	I1127 11:36:36.254708  165526 command_runner.go:130] >         reload
	I1127 11:36:36.254719  165526 command_runner.go:130] >         loadbalance
	I1127 11:36:36.254726  165526 command_runner.go:130] >     }
	I1127 11:36:36.254736  165526 command_runner.go:130] > kind: ConfigMap
	I1127 11:36:36.254745  165526 command_runner.go:130] > metadata:
	I1127 11:36:36.254755  165526 command_runner.go:130] >   creationTimestamp: "2023-11-27T11:36:22Z"
	I1127 11:36:36.254762  165526 command_runner.go:130] >   name: coredns
	I1127 11:36:36.254767  165526 command_runner.go:130] >   namespace: kube-system
	I1127 11:36:36.254774  165526 command_runner.go:130] >   resourceVersion: "229"
	I1127 11:36:36.254779  165526 command_runner.go:130] >   uid: ad751700-f0db-45cb-a0ba-eb548c1ce121
	I1127 11:36:36.254918  165526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
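
The sed pipeline above splices a hosts plugin block into the Corefile just before the forward stanza (plus a log directive before errors), so that host.minikube.internal resolves to the host gateway. The fragment it inserts, exactly as encoded in the command, is:

        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }
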
	I1127 11:36:36.255201  165526 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:36:36.255571  165526 kapi.go:59] client config for multinode-780990: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/client.crt", KeyFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/client.key", CAFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24d80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 11:36:36.255979  165526 node_ready.go:35] waiting up to 6m0s for node "multinode-780990" to be "Ready" ...
	I1127 11:36:36.256111  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:36.256119  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:36.256131  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:36.256141  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:36.258346  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:36.258370  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:36.258381  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:36.258391  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:36.258400  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:36.258411  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:36.258418  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:36 GMT
	I1127 11:36:36.258429  165526 round_trippers.go:580]     Audit-Id: 927c0542-a0c8-4e11-8916-cb5cda117972
	I1127 11:36:36.258611  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:36.259305  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:36.259325  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:36.259335  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:36.259345  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:36.261517  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:36.261542  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:36.261559  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:36.261570  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:36 GMT
	I1127 11:36:36.261579  165526 round_trippers.go:580]     Audit-Id: 3f629e6d-fa93-4156-b0da-a88ce70608cf
	I1127 11:36:36.261587  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:36.261597  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:36.261612  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:36.261757  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:36.361102  165526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 11:36:36.361701  165526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 11:36:36.762353  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:36.762377  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:36.762401  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:36.762411  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:36.764878  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:36.764909  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:36.764920  165526 round_trippers.go:580]     Audit-Id: c8ad4ae2-28b2-45ac-b8a4-8d05140229f3
	I1127 11:36:36.764929  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:36.764938  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:36.764947  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:36.764956  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:36.764965  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:36 GMT
	I1127 11:36:36.766168  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:36.875468  165526 command_runner.go:130] > configmap/coredns replaced
	I1127 11:36:36.875522  165526 start.go:926] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1127 11:36:36.964175  165526 command_runner.go:130] > storageclass.storage.k8s.io/standard created
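
The storageclass.yaml applied above is small (271 bytes per the scp line); judging from the last-applied-configuration annotation in the API response below, its content is approximately:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s.io/minikube-hostpath
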
	I1127 11:36:36.968556  165526 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1127 11:36:36.968587  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:36.968598  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:36.968604  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:36.971190  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:36.971219  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:36.971230  165526 round_trippers.go:580]     Audit-Id: ea6a9326-c8d3-4954-9d3e-68bf8cf5fdc6
	I1127 11:36:36.971239  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:36.971248  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:36.971261  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:36.971274  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:36.971282  165526 round_trippers.go:580]     Content-Length: 1273
	I1127 11:36:36.971290  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:36 GMT
	I1127 11:36:36.971568  165526 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"367"},"items":[{"metadata":{"name":"standard","uid":"d6ffdb57-5944-4632-b776-970bac7d86eb","resourceVersion":"366","creationTimestamp":"2023-11-27T11:36:36Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-27T11:36:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1127 11:36:36.972103  165526 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6ffdb57-5944-4632-b776-970bac7d86eb","resourceVersion":"366","creationTimestamp":"2023-11-27T11:36:36Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-27T11:36:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1127 11:36:36.972167  165526 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1127 11:36:36.972178  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:36.972190  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:36.972202  165526 round_trippers.go:473]     Content-Type: application/json
	I1127 11:36:36.972211  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:36.974658  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:36.974684  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:36.974695  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:36.974705  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:36.974711  165526 round_trippers.go:580]     Content-Length: 1220
	I1127 11:36:36.974718  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:36 GMT
	I1127 11:36:36.974726  165526 round_trippers.go:580]     Audit-Id: 294fb3b4-b9fe-4af2-b979-1c228135803f
	I1127 11:36:36.974732  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:36.974739  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:36.974774  165526 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6ffdb57-5944-4632-b776-970bac7d86eb","resourceVersion":"366","creationTimestamp":"2023-11-27T11:36:36Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-27T11:36:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
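
The GET/PUT pair above is Kubernetes optimistic concurrency in action: the client reads the "standard" StorageClass (resourceVersion "366") and writes it back with that resourceVersion intact, so the apiserver accepts the PUT only if nothing else modified the object in between. A minimal sketch of the same read-modify-write pattern using client-go's conflict-retry helper, assuming a configured kubernetes.Interface named client (retry is k8s.io/client-go/util/retry):

	// On a 409 Conflict (stale resourceVersion) the closure is re-run with a fresh read.
	err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
		if err != nil {
			return err
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err // Conflict errors trigger a retry; anything else aborts
	})
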
	I1127 11:36:37.158437  165526 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1127 11:36:37.164280  165526 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1127 11:36:37.173122  165526 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1127 11:36:37.180042  165526 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1127 11:36:37.187039  165526 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1127 11:36:37.194304  165526 command_runner.go:130] > pod/storage-provisioner created
	I1127 11:36:37.201653  165526 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1127 11:36:37.203041  165526 addons.go:502] enable addons completed in 1.063714947s: enabled=[default-storageclass storage-provisioner]
	I1127 11:36:37.263041  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:37.263064  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:37.263075  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:37.263084  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:37.265670  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:37.265702  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:37.265714  165526 round_trippers.go:580]     Audit-Id: 3b3593c8-b17d-4f93-81ca-98cff0b24467
	I1127 11:36:37.265720  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:37.265726  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:37.265731  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:37.265736  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:37.265741  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:37 GMT
	I1127 11:36:37.265902  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:37.762574  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:37.762615  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:37.762624  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:37.762630  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:37.764881  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:37.764908  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:37.764919  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:37.764926  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:37.764933  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:37.764940  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:37 GMT
	I1127 11:36:37.764947  165526 round_trippers.go:580]     Audit-Id: dbb207d9-ef9a-4698-90d4-a19211c11190
	I1127 11:36:37.764956  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:37.765109  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:38.262513  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:38.262543  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:38.262563  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:38.262571  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:38.266563  165526 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 11:36:38.266586  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:38.266594  165526 round_trippers.go:580]     Audit-Id: 09bfacb2-122c-4819-bc7e-197d86c8fd13
	I1127 11:36:38.266599  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:38.266605  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:38.266610  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:38.266615  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:38.266620  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:38 GMT
	I1127 11:36:38.266744  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:38.267118  165526 node_ready.go:58] node "multinode-780990" has status "Ready":"False"
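
The repeating GET blocks above and below are minikube's node_ready wait loop: it polls /api/v1/nodes/multinode-780990 roughly every 500ms (note the .262/.762 cadence in the timestamps) until the node's Ready condition turns True. A minimal sketch of such a loop with client-go; minikube's node_ready.go differs in detail, and the interval and timeout here are assumptions read off the log (wait is k8s.io/apimachinery/pkg/util/wait, corev1 is k8s.io/api/core/v1):

	// Poll the node until its Ready condition is True (or the timeout elapses).
	err := wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(ctx, "multinode-780990", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as transient and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
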
	I1127 11:36:38.762365  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:38.762406  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:38.762416  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:38.762423  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:38.764809  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:38.764835  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:38.764858  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:38.764868  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:38.764884  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:38.764892  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:38 GMT
	I1127 11:36:38.764901  165526 round_trippers.go:580]     Audit-Id: e4f62dba-0803-4158-9914-087fc2696c47
	I1127 11:36:38.764916  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:38.765066  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:39.262415  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:39.262439  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:39.262447  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:39.262453  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:39.264596  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:39.264619  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:39.264629  165526 round_trippers.go:580]     Audit-Id: 0bad314d-95da-426f-9cd1-7cf2f86b8f18
	I1127 11:36:39.264637  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:39.264644  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:39.264651  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:39.264660  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:39.264671  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:39 GMT
	I1127 11:36:39.264827  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:39.762437  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:39.762462  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:39.762470  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:39.762476  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:39.764686  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:39.764713  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:39.764725  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:39 GMT
	I1127 11:36:39.764731  165526 round_trippers.go:580]     Audit-Id: 754ad90f-e53d-442a-87e3-479135ef5e52
	I1127 11:36:39.764737  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:39.764742  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:39.764748  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:39.764753  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:39.764906  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:40.262456  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:40.262480  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:40.262488  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:40.262494  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:40.264666  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:40.264686  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:40.264698  165526 round_trippers.go:580]     Audit-Id: c41a4348-c624-40da-bca9-ae78fbcd79b3
	I1127 11:36:40.264703  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:40.264716  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:40.264724  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:40.264732  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:40.264741  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:40 GMT
	I1127 11:36:40.264884  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:40.762453  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:40.762482  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:40.762496  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:40.762504  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:40.764812  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:40.764833  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:40.764840  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:40 GMT
	I1127 11:36:40.764846  165526 round_trippers.go:580]     Audit-Id: 8dfb513d-e90c-4fd6-84c3-689b53327629
	I1127 11:36:40.764851  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:40.764856  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:40.764863  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:40.764868  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:40.765002  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:40.765321  165526 node_ready.go:58] node "multinode-780990" has status "Ready":"False"
	I1127 11:36:41.262672  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:41.262696  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:41.262706  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:41.262714  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:41.265034  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:41.265054  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:41.265061  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:41.265067  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:41.265072  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:41.265077  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:41 GMT
	I1127 11:36:41.265082  165526 round_trippers.go:580]     Audit-Id: 5881e435-0b5a-4f66-8e68-3bcf57a11b9a
	I1127 11:36:41.265087  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:41.265305  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:41.762966  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:41.762992  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:41.763000  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:41.763008  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:41.765444  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:41.765466  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:41.765473  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:41 GMT
	I1127 11:36:41.765479  165526 round_trippers.go:580]     Audit-Id: 944d0b16-92e3-4c0a-a628-5c1c0e543ac7
	I1127 11:36:41.765484  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:41.765489  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:41.765494  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:41.765501  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:41.765652  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:42.263370  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:42.263408  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:42.263416  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:42.263422  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:42.265685  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:42.265705  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:42.265712  165526 round_trippers.go:580]     Audit-Id: ad4e28bf-8fc1-4c67-b8b0-a898dc1ddfae
	I1127 11:36:42.265718  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:42.265723  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:42.265729  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:42.265734  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:42.265739  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:42 GMT
	I1127 11:36:42.265923  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:42.762989  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:42.763014  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:42.763023  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:42.763033  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:42.765399  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:42.765426  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:42.765436  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:42.765466  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:42 GMT
	I1127 11:36:42.765473  165526 round_trippers.go:580]     Audit-Id: aefee633-898d-4009-b19f-4ad44d1a539b
	I1127 11:36:42.765478  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:42.765484  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:42.765492  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:42.765621  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:42.766049  165526 node_ready.go:58] node "multinode-780990" has status "Ready":"False"
	I1127 11:36:43.263304  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:43.263338  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:43.263346  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:43.263352  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:43.265658  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:43.265680  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:43.265690  165526 round_trippers.go:580]     Audit-Id: c4b56df9-1001-4353-b136-c5830b79218a
	I1127 11:36:43.265698  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:43.265706  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:43.265714  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:43.265726  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:43.265734  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:43 GMT
	I1127 11:36:43.265957  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:43.762374  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:43.762402  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:43.762410  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:43.762417  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:43.764889  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:43.764914  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:43.764923  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:43 GMT
	I1127 11:36:43.764930  165526 round_trippers.go:580]     Audit-Id: 88c9b017-53a7-44d6-836e-b79bd4dc967d
	I1127 11:36:43.764935  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:43.764941  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:43.764946  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:43.764952  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:43.765076  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:44.262421  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:44.262446  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:44.262455  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:44.262461  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:44.264899  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:44.264933  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:44.264943  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:44.264952  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:44.264959  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:44 GMT
	I1127 11:36:44.264966  165526 round_trippers.go:580]     Audit-Id: e73eaa5d-2609-4f02-8859-8341df105172
	I1127 11:36:44.264977  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:44.264985  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:44.265127  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:44.762720  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:44.762742  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:44.762750  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:44.762758  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:44.764883  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:44.764903  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:44.764910  165526 round_trippers.go:580]     Audit-Id: 4baa85b6-5cf0-49c7-bd6f-ff0bdcf6bbe4
	I1127 11:36:44.764915  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:44.764920  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:44.764925  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:44.764931  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:44.764936  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:44 GMT
	I1127 11:36:44.765090  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:45.262696  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:45.262728  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:45.262737  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:45.262743  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:45.264957  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:45.264976  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:45.264983  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:45.264989  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:45.264994  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:45.264999  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:45.265004  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:45 GMT
	I1127 11:36:45.265017  165526 round_trippers.go:580]     Audit-Id: 812a9286-c88b-42de-89ff-4d71910ecacd
	I1127 11:36:45.265121  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:45.265428  165526 node_ready.go:58] node "multinode-780990" has status "Ready":"False"
	I1127 11:36:45.762447  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:45.762468  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:45.762477  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:45.762483  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:45.764845  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:45.764871  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:45.764881  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:45 GMT
	I1127 11:36:45.764889  165526 round_trippers.go:580]     Audit-Id: d9cc0977-438f-4428-89d5-7f4d6338bca2
	I1127 11:36:45.764897  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:45.764906  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:45.764915  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:45.764923  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:45.765033  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:46.262860  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:46.262893  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:46.262901  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:46.262907  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:46.265263  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:46.265285  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:46.265292  165526 round_trippers.go:580]     Audit-Id: 4826e627-5cea-4588-990f-c4d5bd9439f0
	I1127 11:36:46.265298  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:46.265304  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:46.265312  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:46.265320  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:46.265333  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:46 GMT
	I1127 11:36:46.265514  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:46.763186  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:46.763216  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:46.763226  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:46.763233  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:46.765641  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:46.765668  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:46.765680  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:46.765695  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:46.765703  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:46 GMT
	I1127 11:36:46.765711  165526 round_trippers.go:580]     Audit-Id: d4dbf6c4-9622-4e3a-88f3-b05a8251314c
	I1127 11:36:46.765721  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:46.765730  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:46.765874  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:47.262460  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:47.262489  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:47.262500  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:47.262516  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:47.265192  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:47.265215  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:47.265222  165526 round_trippers.go:580]     Audit-Id: cf89fb9a-af03-4a61-adfe-1766c1f63369
	I1127 11:36:47.265227  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:47.265232  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:47.265240  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:47.265245  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:47.265250  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:47 GMT
	I1127 11:36:47.265424  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:47.265902  165526 node_ready.go:58] node "multinode-780990" has status "Ready":"False"
	I1127 11:36:47.763208  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:47.763231  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:47.763241  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:47.763250  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:47.765677  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:47.765702  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:47.765712  165526 round_trippers.go:580]     Audit-Id: 9c515fb9-4dfe-4724-a851-74f1a786472e
	I1127 11:36:47.765720  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:47.765728  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:47.765737  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:47.765746  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:47.765759  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:47 GMT
	I1127 11:36:47.765879  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:48.262419  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:48.262445  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:48.262453  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:48.262459  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:48.264856  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:48.264879  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:48.264888  165526 round_trippers.go:580]     Audit-Id: d9fdd47f-9a94-4478-bd22-a3fddcfcd962
	I1127 11:36:48.264897  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:48.264904  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:48.264912  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:48.264919  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:48.264929  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:48 GMT
	I1127 11:36:48.265060  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:48.762624  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:48.762650  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:48.762658  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:48.762665  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:48.764973  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:48.764995  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:48.765004  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:48.765012  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:48 GMT
	I1127 11:36:48.765019  165526 round_trippers.go:580]     Audit-Id: 5e434911-3e03-422e-abc8-de7e35cf2371
	I1127 11:36:48.765029  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:48.765038  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:48.765048  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:48.765220  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:49.262830  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:49.262853  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:49.262861  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:49.262867  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:49.265212  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:49.265230  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:49.265237  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:49.265245  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:49.265252  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:49.265264  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:49 GMT
	I1127 11:36:49.265271  165526 round_trippers.go:580]     Audit-Id: ebf7b80a-0142-44b5-8747-b4038e0afd82
	I1127 11:36:49.265282  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:49.265520  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:49.763231  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:49.763258  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:49.763271  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:49.763284  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:49.765684  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:49.765708  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:49.765717  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:49.765727  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:49 GMT
	I1127 11:36:49.765736  165526 round_trippers.go:580]     Audit-Id: bf989c34-5b84-45b2-af09-30b886b2e61d
	I1127 11:36:49.765749  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:49.765760  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:49.765769  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:49.765952  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:49.766319  165526 node_ready.go:58] node "multinode-780990" has status "Ready":"False"
	I1127 11:36:50.262533  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:50.262556  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:50.262564  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:50.262570  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:50.264933  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:50.264960  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:50.264971  165526 round_trippers.go:580]     Audit-Id: 1634540e-3f93-47b4-ae4a-0edc3e4e6c34
	I1127 11:36:50.264987  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:50.264998  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:50.265004  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:50.265015  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:50.265023  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:50 GMT
	I1127 11:36:50.265139  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:50.762709  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:50.762731  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:50.762740  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:50.762746  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:50.765103  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:50.765129  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:50.765140  165526 round_trippers.go:580]     Audit-Id: 0f20dadc-2f3b-4c08-b1b3-83f36276c9b8
	I1127 11:36:50.765148  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:50.765153  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:50.765158  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:50.765166  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:50.765171  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:50 GMT
	I1127 11:36:50.765289  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:51.262925  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:51.262949  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:51.262957  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:51.262965  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:51.267767  165526 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1127 11:36:51.267791  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:51.267802  165526 round_trippers.go:580]     Audit-Id: 38269d3a-1f98-454f-a605-9dc8262c2f15
	I1127 11:36:51.267809  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:51.267816  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:51.267823  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:51.267831  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:51.267839  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:51 GMT
	I1127 11:36:51.268048  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:51.762580  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:51.762606  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:51.762614  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:51.762621  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:51.764991  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:51.765018  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:51.765027  165526 round_trippers.go:580]     Audit-Id: 47047d3c-f37b-4de1-bde1-326043e79013
	I1127 11:36:51.765035  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:51.765043  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:51.765050  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:51.765059  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:51.765069  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:51 GMT
	I1127 11:36:51.765185  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:52.262776  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:52.262801  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:52.262809  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:52.262816  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:52.265369  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:52.265392  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:52.265401  165526 round_trippers.go:580]     Audit-Id: 5717851e-cd58-4a6d-ae25-99554cb25bd7
	I1127 11:36:52.265408  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:52.265415  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:52.265422  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:52.265430  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:52.265442  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:52 GMT
	I1127 11:36:52.265604  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:52.265948  165526 node_ready.go:58] node "multinode-780990" has status "Ready":"False"
	I1127 11:36:52.762687  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:52.762711  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:52.762722  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:52.762729  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:52.765144  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:52.765165  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:52.765173  165526 round_trippers.go:580]     Audit-Id: 3b6d0a34-2c43-47ce-800e-0eb928280df2
	I1127 11:36:52.765178  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:52.765183  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:52.765188  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:52.765193  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:52.765198  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:52 GMT
	I1127 11:36:52.765345  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:53.262954  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:53.262979  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:53.262987  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:53.262993  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:53.265315  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:53.265339  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:53.265349  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:53.265358  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:53.265366  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:53.265375  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:53 GMT
	I1127 11:36:53.265384  165526 round_trippers.go:580]     Audit-Id: b4a859d9-3f2a-4ba0-a7f3-2a0cd389c8d9
	I1127 11:36:53.265391  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:53.265561  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:53.763255  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:53.763280  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:53.763288  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:53.763294  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:53.765678  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:53.765702  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:53.765709  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:53.765714  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:53.765719  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:53.765725  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:53.765730  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:53 GMT
	I1127 11:36:53.765735  165526 round_trippers.go:580]     Audit-Id: 5d24602c-04cd-4853-9218-78c83e547116
	I1127 11:36:53.765849  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:54.262486  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:54.262516  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:54.262527  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:54.262537  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:54.265000  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:54.265024  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:54.265031  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:54.265036  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:54.265042  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:54 GMT
	I1127 11:36:54.265048  165526 round_trippers.go:580]     Audit-Id: ddba04d3-df78-45b9-ad88-21fa0d58b1f3
	I1127 11:36:54.265053  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:54.265059  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:54.265188  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:54.762799  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:54.762825  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:54.762833  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:54.762840  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:54.765229  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:54.765251  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:54.765258  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:54.765264  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:54.765269  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:54.765275  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:54 GMT
	I1127 11:36:54.765280  165526 round_trippers.go:580]     Audit-Id: 30f6c189-7cb7-4742-b72d-f6f2a254ab5c
	I1127 11:36:54.765285  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:54.765412  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:54.765755  165526 node_ready.go:58] node "multinode-780990" has status "Ready":"False"
	I1127 11:36:55.262911  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:55.262934  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:55.262942  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:55.262948  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:55.265258  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:55.265282  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:55.265292  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:55.265300  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:55.265308  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:55.265316  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:55 GMT
	I1127 11:36:55.265324  165526 round_trippers.go:580]     Audit-Id: 28a8b7a7-fee3-446e-b10d-d0d25076fa40
	I1127 11:36:55.265331  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:55.265445  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:55.763095  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:55.763119  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:55.763127  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:55.763134  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:55.765479  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:55.765506  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:55.765516  165526 round_trippers.go:580]     Audit-Id: b0491165-644b-452c-b6d9-eacdbec83221
	I1127 11:36:55.765524  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:55.765532  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:55.765540  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:55.765549  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:55.765561  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:55 GMT
	I1127 11:36:55.765701  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:56.263284  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:56.263340  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:56.263349  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:56.263355  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:56.265737  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:56.265760  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:56.265767  165526 round_trippers.go:580]     Audit-Id: 81de2418-66b9-48d4-83af-93c1dcebf5c9
	I1127 11:36:56.265773  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:56.265778  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:56.265783  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:56.265788  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:56.265793  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:56 GMT
	I1127 11:36:56.265988  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:56.762569  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:56.762595  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:56.762604  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:56.762610  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:56.764889  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:56.764914  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:56.764924  165526 round_trippers.go:580]     Audit-Id: 8ee616a5-a193-4753-8ca3-5a18e78de308
	I1127 11:36:56.764932  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:56.764942  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:56.764950  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:56.764957  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:56.764969  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:56 GMT
	I1127 11:36:56.765177  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:57.262754  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:57.262780  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:57.262792  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:57.262800  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:57.265405  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:57.265432  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:57.265443  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:57 GMT
	I1127 11:36:57.265452  165526 round_trippers.go:580]     Audit-Id: a6aedd2d-e69a-4347-8338-822b689c1ab0
	I1127 11:36:57.265458  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:57.265463  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:57.265469  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:57.265477  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:57.265642  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:57.266000  165526 node_ready.go:58] node "multinode-780990" has status "Ready":"False"
	I1127 11:36:57.763365  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:57.763387  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:57.763394  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:57.763400  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:57.765854  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:57.765879  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:57.765888  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:57.765897  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:57.765904  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:57.765913  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:57 GMT
	I1127 11:36:57.765922  165526 round_trippers.go:580]     Audit-Id: 7ab3848b-1323-42ba-b6a7-9ffd1c28e8f9
	I1127 11:36:57.765931  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:57.766090  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:58.262696  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:58.262723  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:58.262731  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:58.262737  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:58.265210  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:58.265232  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:58.265240  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:58.265247  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:58.265255  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:58.265262  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:58.265273  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:58 GMT
	I1127 11:36:58.265281  165526 round_trippers.go:580]     Audit-Id: 1217c25d-1183-4546-bd9a-57007afde93b
	I1127 11:36:58.265493  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:58.763186  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:58.763220  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:58.763231  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:58.763245  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:58.765644  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:58.765683  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:58.765694  165526 round_trippers.go:580]     Audit-Id: be5b7ff0-55dd-4593-9817-a77540211197
	I1127 11:36:58.765702  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:58.765710  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:58.765718  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:58.765727  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:58.765745  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:58 GMT
	I1127 11:36:58.765892  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:59.262407  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:59.262433  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:59.262441  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:59.262447  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:59.264838  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:59.264863  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:59.264872  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:59 GMT
	I1127 11:36:59.264879  165526 round_trippers.go:580]     Audit-Id: afe94c7c-283b-4f3d-bcfa-4e35952b305d
	I1127 11:36:59.264886  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:59.264895  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:59.264907  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:59.264919  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:59.265092  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:59.762754  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:36:59.762783  165526 round_trippers.go:469] Request Headers:
	I1127 11:36:59.762791  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:36:59.762797  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:36:59.765049  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:36:59.765082  165526 round_trippers.go:577] Response Headers:
	I1127 11:36:59.765092  165526 round_trippers.go:580]     Audit-Id: 979a8439-a99a-463c-b9c2-4d624c0cd5d4
	I1127 11:36:59.765100  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:36:59.765108  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:36:59.765115  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:36:59.765123  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:36:59.765135  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:36:59 GMT
	I1127 11:36:59.765272  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:36:59.765644  165526 node_ready.go:58] node "multinode-780990" has status "Ready":"False"
	I1127 11:37:00.262943  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:00.262968  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:00.262982  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:00.262991  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:00.265232  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:00.265256  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:00.265263  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:00.265269  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:00.265277  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:00 GMT
	I1127 11:37:00.265282  165526 round_trippers.go:580]     Audit-Id: 1acc4d19-5023-4994-8a35-c3d7bf7b0474
	I1127 11:37:00.265297  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:00.265307  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:00.265545  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:37:00.763110  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:00.763137  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:00.763148  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:00.763156  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:00.765381  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:00.765401  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:00.765411  165526 round_trippers.go:580]     Audit-Id: 7e3eed5e-9c8c-460c-bfeb-d22bfa5a3759
	I1127 11:37:00.765418  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:00.765426  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:00.765450  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:00.765462  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:00.765471  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:00 GMT
	I1127 11:37:00.765637  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:37:01.263364  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:01.263392  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:01.263400  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:01.263406  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:01.265703  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:01.265725  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:01.265732  165526 round_trippers.go:580]     Audit-Id: 2d795638-8699-40a2-9efa-d6087ce230ba
	I1127 11:37:01.265738  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:01.265746  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:01.265755  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:01.265763  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:01.265772  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:01 GMT
	I1127 11:37:01.265958  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:37:01.762630  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:01.762658  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:01.762667  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:01.762673  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:01.765220  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:01.765249  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:01.765259  165526 round_trippers.go:580]     Audit-Id: 98ce022a-23ae-4c42-8953-96d95807e498
	I1127 11:37:01.765268  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:01.765276  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:01.765285  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:01.765293  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:01.765302  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:01 GMT
	I1127 11:37:01.765433  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:37:01.765802  165526 node_ready.go:58] node "multinode-780990" has status "Ready":"False"
	I1127 11:37:02.263130  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:02.263160  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:02.263169  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:02.263175  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:02.265546  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:02.265575  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:02.265586  165526 round_trippers.go:580]     Audit-Id: 76454658-92f0-4d5d-8719-6a7d583208e1
	I1127 11:37:02.265595  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:02.265605  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:02.265613  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:02.265623  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:02.265635  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:02 GMT
	I1127 11:37:02.265776  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:37:02.763053  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:02.763078  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:02.763086  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:02.763092  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:02.765665  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:02.765689  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:02.765696  165526 round_trippers.go:580]     Audit-Id: 8e69f8b2-9167-44f9-9187-defc947ef98f
	I1127 11:37:02.765704  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:02.765709  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:02.765714  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:02.765719  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:02.765726  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:02 GMT
	I1127 11:37:02.765892  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:37:03.262443  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:03.262474  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:03.262487  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:03.262497  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:03.264976  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:03.265014  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:03.265026  165526 round_trippers.go:580]     Audit-Id: eb0b47ae-e201-4e69-85ca-c167e4b3cb6b
	I1127 11:37:03.265036  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:03.265045  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:03.265054  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:03.265063  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:03.265075  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:03 GMT
	I1127 11:37:03.265264  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:37:03.762806  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:03.762833  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:03.762842  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:03.762848  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:03.765351  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:03.765379  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:03.765387  165526 round_trippers.go:580]     Audit-Id: 2bd8897d-8f82-4a5a-aa59-f306c655bb10
	I1127 11:37:03.765393  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:03.765398  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:03.765404  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:03.765409  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:03.765414  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:03 GMT
	I1127 11:37:03.765523  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:37:03.765866  165526 node_ready.go:58] node "multinode-780990" has status "Ready":"False"
	I1127 11:37:04.263255  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:04.263279  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:04.263288  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:04.263294  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:04.265730  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:04.265753  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:04.265763  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:04.265772  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:04 GMT
	I1127 11:37:04.265779  165526 round_trippers.go:580]     Audit-Id: 135b9e1b-3fe0-4d7b-b5d5-d80ecabf57e9
	I1127 11:37:04.265785  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:04.265793  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:04.265800  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:04.265913  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:37:04.762456  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:04.762482  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:04.762491  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:04.762497  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:04.764843  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:04.764867  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:04.764877  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:04 GMT
	I1127 11:37:04.764884  165526 round_trippers.go:580]     Audit-Id: f2b64dde-96a2-4128-ac26-cf613acd55ba
	I1127 11:37:04.764892  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:04.764901  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:04.764909  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:04.764919  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:04.765023  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:37:05.262646  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:05.262670  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:05.262679  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:05.262685  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:05.264942  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:05.264973  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:05.264984  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:05.264993  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:05.265005  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:05.265018  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:05 GMT
	I1127 11:37:05.265031  165526 round_trippers.go:580]     Audit-Id: fd34872a-c981-479c-b70e-e807c1ad132d
	I1127 11:37:05.265043  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:05.265159  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:37:05.762729  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:05.762763  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:05.762773  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:05.762784  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:05.765018  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:05.765041  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:05.765052  165526 round_trippers.go:580]     Audit-Id: 63635b57-ee58-4fcf-a0a7-b045b8dfb89a
	I1127 11:37:05.765059  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:05.765066  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:05.765074  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:05.765081  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:05.765091  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:05 GMT
	I1127 11:37:05.765223  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:37:06.263008  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:06.263030  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:06.263038  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:06.263044  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:06.265356  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:06.265384  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:06.265395  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:06.265405  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:06.265414  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:06 GMT
	I1127 11:37:06.265426  165526 round_trippers.go:580]     Audit-Id: c7d9f283-44b8-40b5-a27b-d1cb51c02713
	I1127 11:37:06.265434  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:06.265440  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:06.265602  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:37:06.265941  165526 node_ready.go:58] node "multinode-780990" has status "Ready":"False"
	I1127 11:37:06.763373  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:06.763400  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:06.763410  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:06.763419  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:06.765648  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:06.765669  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:06.765677  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:06.765682  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:06.765687  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:06.765692  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:06 GMT
	I1127 11:37:06.765697  165526 round_trippers.go:580]     Audit-Id: 583b1892-6dac-45f8-b159-ccc0fd7569e0
	I1127 11:37:06.765703  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:06.765913  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"301","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1127 11:37:07.263280  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:07.263307  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:07.263326  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:07.263334  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:07.265141  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:07.265164  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:07.265172  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:07.265177  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:07 GMT
	I1127 11:37:07.265183  165526 round_trippers.go:580]     Audit-Id: d2aa8515-5023-42f1-9053-6667c4ce10ae
	I1127 11:37:07.265188  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:07.265193  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:07.265200  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:07.265318  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"391","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1127 11:37:07.265743  165526 node_ready.go:49] node "multinode-780990" has status "Ready":"True"
	I1127 11:37:07.265766  165526 node_ready.go:38] duration metric: took 31.009766284s waiting for node "multinode-780990" to be "Ready" ...
	I1127 11:37:07.265780  165526 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
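	[editor's note] The entries above record the node_ready wait loop: the client re-fetches the Node object roughly every 500ms and inspects its Ready condition until it reports True (here after ~31s). For reference, a minimal client-go sketch of that polling pattern follows. This is an illustration under stated assumptions, not minikube's actual node_ready.go code; the kubeconfig path is a placeholder.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; in the log the client targets https://192.168.58.2:8443.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 500ms (matching the ~500ms cadence visible above) until the
		// node's Ready condition is True or the timeout elapses.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "multinode-780990", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat API errors as transient and keep polling
				}
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil // no Ready condition yet
			})
		if err != nil {
			panic(err)
		}
		fmt.Println(`node "multinode-780990" is Ready`)
	}

	Returning (false, nil) from the condition function keeps polling through not-yet-Ready responses, which matches the repeated "Ready":"False" checks in the log.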
	I1127 11:37:07.265861  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 11:37:07.265883  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:07.265891  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:07.265900  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:07.272895  165526 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1127 11:37:07.272922  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:07.272933  165526 round_trippers.go:580]     Audit-Id: 3d08e1e5-7bb4-4936-9c10-4e65c11309b0
	I1127 11:37:07.272942  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:07.272949  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:07.272957  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:07.272964  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:07.272972  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:07 GMT
	I1127 11:37:07.273462  165526 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4jsq5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c4d42d52-2ac2-435b-a219-96b0b3934f2d","resourceVersion":"393","creationTimestamp":"2023-11-27T11:36:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"372276c5-2c58-4ce2-8fb2-7a04d78d7e05","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"372276c5-2c58-4ce2-8fb2-7a04d78d7e05\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54146 chars]
	I1127 11:37:07.277466  165526 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4jsq5" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:07.277557  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4jsq5
	I1127 11:37:07.277565  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:07.277572  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:07.277579  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:07.279424  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:07.279441  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:07.279450  165526 round_trippers.go:580]     Audit-Id: b00a659e-3c26-4903-9fc3-98a66247eb57
	I1127 11:37:07.279459  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:07.279467  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:07.279474  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:07.279487  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:07.279496  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:07 GMT
	I1127 11:37:07.279601  165526 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4jsq5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c4d42d52-2ac2-435b-a219-96b0b3934f2d","resourceVersion":"397","creationTimestamp":"2023-11-27T11:36:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"372276c5-2c58-4ce2-8fb2-7a04d78d7e05","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"372276c5-2c58-4ce2-8fb2-7a04d78d7e05\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1127 11:37:07.280019  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:07.280034  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:07.280041  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:07.280056  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:07.281614  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:07.281632  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:07.281642  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:07.281651  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:07.281662  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:07.281669  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:07.281680  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:07 GMT
	I1127 11:37:07.281695  165526 round_trippers.go:580]     Audit-Id: 709c9319-d8ca-4a45-8e16-ac226361338f
	I1127 11:37:07.281820  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"391","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1127 11:37:07.282182  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4jsq5
	I1127 11:37:07.282196  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:07.282206  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:07.282212  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:07.283897  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:07.284281  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:07.284308  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:07.284339  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:07.284348  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:07.284356  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:07 GMT
	I1127 11:37:07.284365  165526 round_trippers.go:580]     Audit-Id: dd4ae923-a787-48dc-804b-4620b69987d5
	I1127 11:37:07.284373  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:07.284549  165526 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4jsq5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c4d42d52-2ac2-435b-a219-96b0b3934f2d","resourceVersion":"397","creationTimestamp":"2023-11-27T11:36:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"372276c5-2c58-4ce2-8fb2-7a04d78d7e05","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"372276c5-2c58-4ce2-8fb2-7a04d78d7e05\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1127 11:37:07.285157  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:07.285168  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:07.285179  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:07.285188  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:07.287042  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:07.287064  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:07.287072  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:07 GMT
	I1127 11:37:07.287079  165526 round_trippers.go:580]     Audit-Id: 141edef6-03d3-4934-a34f-9ae682f8e948
	I1127 11:37:07.287088  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:07.287098  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:07.287110  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:07.287122  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:07.287255  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"391","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1127 11:37:07.787983  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4jsq5
	I1127 11:37:07.788012  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:07.788020  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:07.788026  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:07.790210  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:07.790231  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:07.790238  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:07.790244  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:07.790250  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:07.790255  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:07.790260  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:07 GMT
	I1127 11:37:07.790265  165526 round_trippers.go:580]     Audit-Id: b04666f3-7321-4faa-8fb3-7c0772368e3e
	I1127 11:37:07.790402  165526 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4jsq5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c4d42d52-2ac2-435b-a219-96b0b3934f2d","resourceVersion":"397","creationTimestamp":"2023-11-27T11:36:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"372276c5-2c58-4ce2-8fb2-7a04d78d7e05","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"372276c5-2c58-4ce2-8fb2-7a04d78d7e05\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1127 11:37:07.790984  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:07.791002  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:07.791015  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:07.791029  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:07.792883  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:07.792910  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:07.792917  165526 round_trippers.go:580]     Audit-Id: e9e07e41-6f98-4a11-8206-14f891707fe0
	I1127 11:37:07.792922  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:07.792927  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:07.792933  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:07.792938  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:07.792944  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:07 GMT
	I1127 11:37:07.793163  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"391","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1127 11:37:08.288736  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4jsq5
	I1127 11:37:08.288759  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:08.288767  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:08.288773  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:08.290946  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:08.290979  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:08.290989  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:08.290998  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:08.291008  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:08.291016  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:08 GMT
	I1127 11:37:08.291025  165526 round_trippers.go:580]     Audit-Id: dae7d558-235b-4e61-b603-668365ae2037
	I1127 11:37:08.291040  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:08.291158  165526 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4jsq5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c4d42d52-2ac2-435b-a219-96b0b3934f2d","resourceVersion":"410","creationTimestamp":"2023-11-27T11:36:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"372276c5-2c58-4ce2-8fb2-7a04d78d7e05","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"372276c5-2c58-4ce2-8fb2-7a04d78d7e05\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1127 11:37:08.291614  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:08.291629  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:08.291636  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:08.291641  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:08.293485  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:08.293502  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:08.293508  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:08.293515  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:08 GMT
	I1127 11:37:08.293523  165526 round_trippers.go:580]     Audit-Id: cfcb072e-5b25-4cf0-8d6d-b4ba119dae13
	I1127 11:37:08.293531  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:08.293545  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:08.293554  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:08.293676  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"391","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1127 11:37:08.294007  165526 pod_ready.go:92] pod "coredns-5dd5756b68-4jsq5" in "kube-system" namespace has status "Ready":"True"
	I1127 11:37:08.294026  165526 pod_ready.go:81] duration metric: took 1.016532322s waiting for pod "coredns-5dd5756b68-4jsq5" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:08.294036  165526 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-780990" in "kube-system" namespace to be "Ready" ...
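	[editor's note] The pod_ready entries work the same way per pod: fetch the Pod, look for its Ready condition, and re-poll until it is True (coredns turned Ready after ~1s above; etcd is checked next). A minimal sketch of that per-pod check follows; podIsReady is a hypothetical helper, not minikube's pod_ready.go, and it assumes a clientset built as in the earlier sketch.

	package ready

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podIsReady reports whether the named pod's PodReady condition is True,
	// e.g. podIsReady(ctx, client, "kube-system", "coredns-5dd5756b68-4jsq5").
	// A caller would invoke it on an interval, as in the node-readiness sketch.
	func podIsReady(ctx context.Context, client kubernetes.Interface, namespace, name string) (bool, error) {
		pod, err := client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // Ready condition not yet published
	}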
	I1127 11:37:08.294090  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-780990
	I1127 11:37:08.294098  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:08.294105  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:08.294112  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:08.295859  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:08.295879  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:08.295888  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:08 GMT
	I1127 11:37:08.295897  165526 round_trippers.go:580]     Audit-Id: 9abee9e6-b3aa-48da-9e50-dbf6cdfa10e4
	I1127 11:37:08.295909  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:08.295922  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:08.295935  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:08.295947  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:08.296038  165526 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-780990","namespace":"kube-system","uid":"1502b7c7-223d-4753-8417-bcfa91c25b37","resourceVersion":"282","creationTimestamp":"2023-11-27T11:36:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"46e54cccbfa94a04c0955770423d5f05","kubernetes.io/config.mirror":"46e54cccbfa94a04c0955770423d5f05","kubernetes.io/config.seen":"2023-11-27T11:36:22.976528163Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1127 11:37:08.296373  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:08.296387  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:08.296397  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:08.296406  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:08.297955  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:08.297970  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:08.297976  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:08 GMT
	I1127 11:37:08.297981  165526 round_trippers.go:580]     Audit-Id: acd23e21-68d3-46f3-a4f7-aa02d978c21e
	I1127 11:37:08.297986  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:08.297991  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:08.297996  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:08.298001  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:08.298120  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"391","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1127 11:37:08.298409  165526 pod_ready.go:92] pod "etcd-multinode-780990" in "kube-system" namespace has status "Ready":"True"
	I1127 11:37:08.298423  165526 pod_ready.go:81] duration metric: took 4.375538ms waiting for pod "etcd-multinode-780990" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:08.298435  165526 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-780990" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:08.298488  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-780990
	I1127 11:37:08.298496  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:08.298502  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:08.298509  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:08.300123  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:08.300136  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:08.300142  165526 round_trippers.go:580]     Audit-Id: 4c749661-d7d7-4fb0-b620-c57014e3d5e9
	I1127 11:37:08.300150  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:08.300156  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:08.300161  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:08.300166  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:08.300171  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:08 GMT
	I1127 11:37:08.300292  165526 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-780990","namespace":"kube-system","uid":"cbd45760-c484-4cb2-836c-2f14805b67dd","resourceVersion":"284","creationTimestamp":"2023-11-27T11:36:23Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"16e69e88ab42c0e4f329585035cb732a","kubernetes.io/config.mirror":"16e69e88ab42c0e4f329585035cb732a","kubernetes.io/config.seen":"2023-11-27T11:36:22.976529906Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1127 11:37:08.300663  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:08.300675  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:08.300682  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:08.300688  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:08.302181  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:08.302196  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:08.302202  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:08.302207  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:08 GMT
	I1127 11:37:08.302213  165526 round_trippers.go:580]     Audit-Id: 1b0aae64-5f2a-4f32-86a9-131c5d63f855
	I1127 11:37:08.302218  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:08.302226  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:08.302237  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:08.302336  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"391","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1127 11:37:08.302613  165526 pod_ready.go:92] pod "kube-apiserver-multinode-780990" in "kube-system" namespace has status "Ready":"True"
	I1127 11:37:08.302626  165526 pod_ready.go:81] duration metric: took 4.180819ms waiting for pod "kube-apiserver-multinode-780990" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:08.302634  165526 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-780990" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:08.302672  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-780990
	I1127 11:37:08.302679  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:08.302685  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:08.302691  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:08.304286  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:08.304300  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:08.304307  165526 round_trippers.go:580]     Audit-Id: 36bac9ed-a676-46ff-922a-dccb5c287812
	I1127 11:37:08.304312  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:08.304317  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:08.304323  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:08.304328  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:08.304336  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:08 GMT
	I1127 11:37:08.304461  165526 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-780990","namespace":"kube-system","uid":"f967b509-0a82-4a6d-badd-530f1c9d9761","resourceVersion":"281","creationTimestamp":"2023-11-27T11:36:21Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"14104bb059abd8436c9b45a2913e2f31","kubernetes.io/config.mirror":"14104bb059abd8436c9b45a2913e2f31","kubernetes.io/config.seen":"2023-11-27T11:36:16.715533663Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1127 11:37:08.464128  165526 request.go:629] Waited for 159.297879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:08.464200  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:08.464212  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:08.464220  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:08.464228  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:08.466495  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:08.466517  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:08.466527  165526 round_trippers.go:580]     Audit-Id: 3253fafa-3729-4a90-9357-afb6549c9d2a
	I1127 11:37:08.466534  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:08.466542  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:08.466550  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:08.466560  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:08.466572  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:08 GMT
	I1127 11:37:08.466721  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"391","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1127 11:37:08.467102  165526 pod_ready.go:92] pod "kube-controller-manager-multinode-780990" in "kube-system" namespace has status "Ready":"True"
	I1127 11:37:08.467121  165526 pod_ready.go:81] duration metric: took 164.480046ms waiting for pod "kube-controller-manager-multinode-780990" in "kube-system" namespace to be "Ready" ...
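The repeated "Waited for … due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter on the client, not from the apiserver's priority-and-fairness machinery. A minimal sketch of where that limit lives, assuming the library defaults of QPS 5 / Burst 10 (the log does not show the values minikube actually sets):

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the default kubeconfig location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go throttles requests on the client side with a token bucket;
	// these are the library defaults, shown only to illustrate where the
	// ~200ms waits in the log originate. Raising them reduces the waits.
	config.QPS = 5
	config.Burst = 10
	_ = kubernetes.NewForConfigOrDie(config)
}
```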
	I1127 11:37:08.467139  165526 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6lbv6" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:08.663537  165526 request.go:629] Waited for 196.29254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6lbv6
	I1127 11:37:08.663613  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6lbv6
	I1127 11:37:08.663618  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:08.663626  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:08.663632  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:08.666018  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:08.666038  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:08.666048  165526 round_trippers.go:580]     Audit-Id: aef5eee1-1849-459b-b140-7c0d0548d920
	I1127 11:37:08.666056  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:08.666064  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:08.666071  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:08.666082  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:08.666093  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:08 GMT
	I1127 11:37:08.666266  165526 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6lbv6","generateName":"kube-proxy-","namespace":"kube-system","uid":"3796fc28-e907-4af3-91f9-7aa0cb2bff44","resourceVersion":"370","creationTimestamp":"2023-11-27T11:36:35Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7161d318-270a-4bd9-be73-21d7f5329814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7161d318-270a-4bd9-be73-21d7f5329814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1127 11:37:08.864079  165526 request.go:629] Waited for 197.343051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:08.864151  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:08.864158  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:08.864166  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:08.864175  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:08.866385  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:08.866417  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:08.866428  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:08.866437  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:08 GMT
	I1127 11:37:08.866446  165526 round_trippers.go:580]     Audit-Id: 37ae89a5-df17-47fc-b83c-f987534d525b
	I1127 11:37:08.866455  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:08.866468  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:08.866475  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:08.866587  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"391","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1127 11:37:08.866909  165526 pod_ready.go:92] pod "kube-proxy-6lbv6" in "kube-system" namespace has status "Ready":"True"
	I1127 11:37:08.866924  165526 pod_ready.go:81] duration metric: took 399.77373ms waiting for pod "kube-proxy-6lbv6" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:08.866933  165526 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-780990" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:09.064226  165526 request.go:629] Waited for 197.227058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-780990
	I1127 11:37:09.064302  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-780990
	I1127 11:37:09.064310  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:09.064318  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:09.064325  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:09.066475  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:09.066496  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:09.066503  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:09.066508  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:09.066513  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:09.066519  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:09.066524  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:09 GMT
	I1127 11:37:09.066529  165526 round_trippers.go:580]     Audit-Id: 932b652e-5bc4-4461-a810-c10e7f6b9ef0
	I1127 11:37:09.066717  165526 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-780990","namespace":"kube-system","uid":"a7b93896-e1d5-432e-8823-0015d815cd78","resourceVersion":"306","creationTimestamp":"2023-11-27T11:36:23Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d167e69bbb0a06d8435e369b8f69acdb","kubernetes.io/config.mirror":"d167e69bbb0a06d8435e369b8f69acdb","kubernetes.io/config.seen":"2023-11-27T11:36:22.976521732Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1127 11:37:09.263431  165526 request.go:629] Waited for 196.294101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:09.263491  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:09.263496  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:09.263504  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:09.263510  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:09.265745  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:09.265768  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:09.265775  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:09.265780  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:09 GMT
	I1127 11:37:09.265786  165526 round_trippers.go:580]     Audit-Id: 76f31b8a-a10d-4766-b1ec-7caf6c3c71b4
	I1127 11:37:09.265791  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:09.265796  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:09.265801  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:09.265922  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"391","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1127 11:37:09.266254  165526 pod_ready.go:92] pod "kube-scheduler-multinode-780990" in "kube-system" namespace has status "Ready":"True"
	I1127 11:37:09.266271  165526 pod_ready.go:81] duration metric: took 399.326886ms waiting for pod "kube-scheduler-multinode-780990" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:09.266282  165526 pod_ready.go:38] duration metric: took 2.000484363s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
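The pod_ready.go steps above poll each system pod until its Ready condition reports True. A minimal client-go sketch of that pattern, assuming a reachable kubeconfig; isPodReady and the hard-coded pod name are illustrative, not minikube's actual code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True — the same
// predicate behind the `has status "Ready":"True"` lines in the log.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// Poll every 500ms for up to 6 minutes, mirroring the "waiting up to
	// 6m0s for pod ... to be Ready" lines above.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-multinode-780990", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat lookup errors as "not ready yet"
		}
		return isPodReady(pod), nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```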
	I1127 11:37:09.266302  165526 api_server.go:52] waiting for apiserver process to appear ...
	I1127 11:37:09.266349  165526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:37:09.276377  165526 command_runner.go:130] > 1416
	I1127 11:37:09.277112  165526 api_server.go:72] duration metric: took 33.108709472s to wait for apiserver process to appear ...
	I1127 11:37:09.277134  165526 api_server.go:88] waiting for apiserver healthz status ...
	I1127 11:37:09.277150  165526 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1127 11:37:09.281950  165526 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1127 11:37:09.282077  165526 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1127 11:37:09.282087  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:09.282095  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:09.282103  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:09.282977  165526 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1127 11:37:09.282989  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:09.282995  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:09.283001  165526 round_trippers.go:580]     Content-Length: 264
	I1127 11:37:09.283006  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:09 GMT
	I1127 11:37:09.283012  165526 round_trippers.go:580]     Audit-Id: 4834ef72-04d7-4c7e-9cb7-e68cde055d27
	I1127 11:37:09.283020  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:09.283026  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:09.283033  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:09.283051  165526 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1127 11:37:09.283145  165526 api_server.go:141] control plane version: v1.28.4
	I1127 11:37:09.283164  165526 api_server.go:131] duration metric: took 6.024162ms to wait for apiserver health ...
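With the pods Ready, api_server.go probes /healthz (expecting the literal body "ok") and then /version on the same endpoint. A hedged sketch of both probes through client-go, which reuses the authenticated transport so the cluster's self-signed CA is already trusted:

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// GET /healthz — a healthy apiserver answers 200 with body "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version — returns the JSON blob shown above (major, minor,
	// gitVersion, buildDate, ...).
	info, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", info.GitVersion) // e.g. v1.28.4
}
```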
	I1127 11:37:09.283172  165526 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 11:37:09.463502  165526 request.go:629] Waited for 180.245628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 11:37:09.463585  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 11:37:09.463602  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:09.463610  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:09.463618  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:09.466774  165526 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 11:37:09.466796  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:09.466803  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:09.466809  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:09.466815  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:09.466820  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:09 GMT
	I1127 11:37:09.466842  165526 round_trippers.go:580]     Audit-Id: 6b540938-4d04-4782-b837-c91580c35582
	I1127 11:37:09.466853  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:09.467328  165526 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4jsq5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c4d42d52-2ac2-435b-a219-96b0b3934f2d","resourceVersion":"410","creationTimestamp":"2023-11-27T11:36:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"372276c5-2c58-4ce2-8fb2-7a04d78d7e05","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"372276c5-2c58-4ce2-8fb2-7a04d78d7e05\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1127 11:37:09.469049  165526 system_pods.go:59] 8 kube-system pods found
	I1127 11:37:09.469080  165526 system_pods.go:61] "coredns-5dd5756b68-4jsq5" [c4d42d52-2ac2-435b-a219-96b0b3934f2d] Running
	I1127 11:37:09.469087  165526 system_pods.go:61] "etcd-multinode-780990" [1502b7c7-223d-4753-8417-bcfa91c25b37] Running
	I1127 11:37:09.469094  165526 system_pods.go:61] "kindnet-vlzt4" [c758b029-c7c6-4cbb-be6a-d1f9a3a52e24] Running
	I1127 11:37:09.469099  165526 system_pods.go:61] "kube-apiserver-multinode-780990" [cbd45760-c484-4cb2-836c-2f14805b67dd] Running
	I1127 11:37:09.469106  165526 system_pods.go:61] "kube-controller-manager-multinode-780990" [f967b509-0a82-4a6d-badd-530f1c9d9761] Running
	I1127 11:37:09.469110  165526 system_pods.go:61] "kube-proxy-6lbv6" [3796fc28-e907-4af3-91f9-7aa0cb2bff44] Running
	I1127 11:37:09.469117  165526 system_pods.go:61] "kube-scheduler-multinode-780990" [a7b93896-e1d5-432e-8823-0015d815cd78] Running
	I1127 11:37:09.469121  165526 system_pods.go:61] "storage-provisioner" [1855f20f-5a70-4e9a-b202-bdc0f046497c] Running
	I1127 11:37:09.469134  165526 system_pods.go:74] duration metric: took 185.956601ms to wait for pod list to return data ...
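The system_pods.go check is one List call over kube-system plus a per-pod phase test. A compact fragment in the same spirit, assuming `cs` is a *kubernetes.Clientset built as in the sketches above (imports elided):

```go
// List kube-system pods and echo the `"name" [uid] Running` lines.
pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
if err != nil {
	panic(err)
}
fmt.Printf("%d kube-system pods found\n", len(pods.Items))
for _, p := range pods.Items {
	fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
}
```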
	I1127 11:37:09.469141  165526 default_sa.go:34] waiting for default service account to be created ...
	I1127 11:37:09.663436  165526 request.go:629] Waited for 194.185034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1127 11:37:09.663494  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1127 11:37:09.663500  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:09.663508  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:09.663514  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:09.665918  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:09.665939  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:09.665946  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:09 GMT
	I1127 11:37:09.665952  165526 round_trippers.go:580]     Audit-Id: b92b6e80-717a-4f90-80b5-af4cbf04050b
	I1127 11:37:09.665957  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:09.665962  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:09.665967  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:09.665972  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:09.665979  165526 round_trippers.go:580]     Content-Length: 261
	I1127 11:37:09.666002  165526 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"9da85948-08de-48c1-a0ed-a8234f84cf57","resourceVersion":"343","creationTimestamp":"2023-11-27T11:36:35Z"}}]}
	I1127 11:37:09.666184  165526 default_sa.go:45] found service account: "default"
	I1127 11:37:09.666199  165526 default_sa.go:55] duration metric: took 197.0526ms for default service account to be created ...
	I1127 11:37:09.666208  165526 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 11:37:09.863680  165526 request.go:629] Waited for 197.369893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 11:37:09.863744  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 11:37:09.863749  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:09.863757  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:09.863766  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:09.866811  165526 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 11:37:09.866836  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:09.866846  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:09.866854  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:09.866863  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:09.866872  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:09 GMT
	I1127 11:37:09.866881  165526 round_trippers.go:580]     Audit-Id: 44ac7a54-58d3-4e19-ac33-5e9708f65717
	I1127 11:37:09.866887  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:09.867217  165526 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"415"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4jsq5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c4d42d52-2ac2-435b-a219-96b0b3934f2d","resourceVersion":"410","creationTimestamp":"2023-11-27T11:36:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"372276c5-2c58-4ce2-8fb2-7a04d78d7e05","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"372276c5-2c58-4ce2-8fb2-7a04d78d7e05\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1127 11:37:09.868977  165526 system_pods.go:86] 8 kube-system pods found
	I1127 11:37:09.869000  165526 system_pods.go:89] "coredns-5dd5756b68-4jsq5" [c4d42d52-2ac2-435b-a219-96b0b3934f2d] Running
	I1127 11:37:09.869007  165526 system_pods.go:89] "etcd-multinode-780990" [1502b7c7-223d-4753-8417-bcfa91c25b37] Running
	I1127 11:37:09.869011  165526 system_pods.go:89] "kindnet-vlzt4" [c758b029-c7c6-4cbb-be6a-d1f9a3a52e24] Running
	I1127 11:37:09.869016  165526 system_pods.go:89] "kube-apiserver-multinode-780990" [cbd45760-c484-4cb2-836c-2f14805b67dd] Running
	I1127 11:37:09.869024  165526 system_pods.go:89] "kube-controller-manager-multinode-780990" [f967b509-0a82-4a6d-badd-530f1c9d9761] Running
	I1127 11:37:09.869028  165526 system_pods.go:89] "kube-proxy-6lbv6" [3796fc28-e907-4af3-91f9-7aa0cb2bff44] Running
	I1127 11:37:09.869032  165526 system_pods.go:89] "kube-scheduler-multinode-780990" [a7b93896-e1d5-432e-8823-0015d815cd78] Running
	I1127 11:37:09.869039  165526 system_pods.go:89] "storage-provisioner" [1855f20f-5a70-4e9a-b202-bdc0f046497c] Running
	I1127 11:37:09.869046  165526 system_pods.go:126] duration metric: took 202.832681ms to wait for k8s-apps to be running ...
	I1127 11:37:09.869059  165526 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 11:37:09.869103  165526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:37:09.881506  165526 system_svc.go:56] duration metric: took 12.435696ms WaitForService to wait for kubelet.
	I1127 11:37:09.881533  165526 kubeadm.go:581] duration metric: took 33.713137091s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
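The ssh_runner step above executes `sudo systemctl is-active --quiet service kubelet` inside the node container over SSH; exit status 0 means the unit is active. A hedged sketch of that round trip with golang.org/x/crypto/ssh — the key path and the forwarded port are placeholders (each kic node publishes sshd on a random 127.0.0.1 port, like the 32852 seen later in this log):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder path; minikube keeps per-machine keys under
	// .minikube/machines/<name>/id_rsa.
	key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/minikube/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32852", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// `systemctl is-active --quiet` prints nothing; success is the exit code.
	if err := sess.Run("sudo systemctl is-active --quiet service kubelet"); err != nil {
		panic(err)
	}
	fmt.Println("kubelet service is running")
}
```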
	I1127 11:37:09.881568  165526 node_conditions.go:102] verifying NodePressure condition ...
	I1127 11:37:10.064039  165526 request.go:629] Waited for 182.372167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1127 11:37:10.064107  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1127 11:37:10.064112  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:10.064120  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:10.064127  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:10.066556  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:10.066576  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:10.066583  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:10.066589  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:10.066594  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:10.066601  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:10.066610  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:10 GMT
	I1127 11:37:10.066620  165526 round_trippers.go:580]     Audit-Id: 55fede1d-a662-4ea3-a636-998536b58f5d
	I1127 11:37:10.066740  165526 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"415"},"items":[{"metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"391","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6120 chars]
	I1127 11:37:10.067176  165526 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1127 11:37:10.067198  165526 node_conditions.go:123] node cpu capacity is 8
	I1127 11:37:10.067213  165526 node_conditions.go:105] duration metric: took 185.635073ms to run NodePressure ...
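The NodePressure verification reads capacity directly off the Node object; the two figures logged (304681132Ki of ephemeral storage, 8 CPUs) are entries in Status.Capacity. A short fragment, again assuming `cs` from the earlier sketches:

```go
node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-780990", metav1.GetOptions{})
if err != nil {
	panic(err)
}
eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
cpu := node.Status.Capacity[corev1.ResourceCPU]
fmt.Printf("node storage ephemeral capacity is %s\n", eph.String()) // 304681132Ki
fmt.Printf("node cpu capacity is %s\n", cpu.String())               // 8
```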
	I1127 11:37:10.067234  165526 start.go:228] waiting for startup goroutines ...
	I1127 11:37:10.067244  165526 start.go:233] waiting for cluster config update ...
	I1127 11:37:10.067256  165526 start.go:242] writing updated cluster config ...
	I1127 11:37:10.069572  165526 out.go:177] 
	I1127 11:37:10.071247  165526 config.go:182] Loaded profile config "multinode-780990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 11:37:10.071340  165526 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/config.json ...
	I1127 11:37:10.073191  165526 out.go:177] * Starting worker node multinode-780990-m02 in cluster multinode-780990
	I1127 11:37:10.074424  165526 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 11:37:10.075920  165526 out.go:177] * Pulling base image ...
	I1127 11:37:10.077805  165526 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 11:37:10.077833  165526 cache.go:56] Caching tarball of preloaded images
	I1127 11:37:10.077914  165526 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 11:37:10.077946  165526 preload.go:174] Found /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1127 11:37:10.077961  165526 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1127 11:37:10.078105  165526 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/config.json ...
	I1127 11:37:10.093239  165526 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1127 11:37:10.093263  165526 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1127 11:37:10.093285  165526 cache.go:194] Successfully downloaded all kic artifacts
	I1127 11:37:10.093320  165526 start.go:365] acquiring machines lock for multinode-780990-m02: {Name:mkbbc925b084100e00383e9e628c7469da960445 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:37:10.093427  165526 start.go:369] acquired machines lock for "multinode-780990-m02" in 86.778µs
	I1127 11:37:10.093455  165526 start.go:93] Provisioning new machine with config: &{Name:multinode-780990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-780990 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1127 11:37:10.093547  165526 start.go:125] createHost starting for "m02" (driver="docker")
	I1127 11:37:10.095552  165526 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1127 11:37:10.095689  165526 start.go:159] libmachine.API.Create for "multinode-780990" (driver="docker")
	I1127 11:37:10.095723  165526 client.go:168] LocalClient.Create starting
	I1127 11:37:10.095809  165526 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem
	I1127 11:37:10.095847  165526 main.go:141] libmachine: Decoding PEM data...
	I1127 11:37:10.095871  165526 main.go:141] libmachine: Parsing certificate...
	I1127 11:37:10.095937  165526 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem
	I1127 11:37:10.095965  165526 main.go:141] libmachine: Decoding PEM data...
	I1127 11:37:10.095981  165526 main.go:141] libmachine: Parsing certificate...
	I1127 11:37:10.096185  165526 cli_runner.go:164] Run: docker network inspect multinode-780990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 11:37:10.111212  165526 network_create.go:77] Found existing network {name:multinode-780990 subnet:0xc0030ebd70 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1127 11:37:10.111255  165526 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-780990-m02" container
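The kic driver assigns deterministic addresses inside the cluster's Docker network: the gateway takes .1, the primary node .2, so the new m02 machine lands on 192.168.58.3. A rough illustration of that arithmetic (speculative — minikube's real logic lives in kic.go and may differ):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Subnet from the network inspect above: 192.168.58.0/24, gateway .1.
	base := net.ParseIP("192.168.58.0").To4()
	nodeIndex := 2 // m02 is the second machine in the cluster
	ip := net.IPv4(base[0], base[1], base[2], base[3]+byte(1+nodeIndex))
	fmt.Println(ip) // 192.168.58.3
}
```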
	I1127 11:37:10.111315  165526 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1127 11:37:10.126075  165526 cli_runner.go:164] Run: docker volume create multinode-780990-m02 --label name.minikube.sigs.k8s.io=multinode-780990-m02 --label created_by.minikube.sigs.k8s.io=true
	I1127 11:37:10.141717  165526 oci.go:103] Successfully created a docker volume multinode-780990-m02
	I1127 11:37:10.141823  165526 cli_runner.go:164] Run: docker run --rm --name multinode-780990-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-780990-m02 --entrypoint /usr/bin/test -v multinode-780990-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1127 11:37:10.681123  165526 oci.go:107] Successfully prepared a docker volume multinode-780990-m02
	I1127 11:37:10.681167  165526 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 11:37:10.681193  165526 kic.go:194] Starting extracting preloaded images to volume ...
	I1127 11:37:10.681269  165526 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-780990-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1127 11:37:15.784981  165526 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-780990-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (5.103666723s)
	I1127 11:37:15.785015  165526 kic.go:203] duration metric: took 5.103818 seconds to extract preloaded images to volume
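The extraction above mounts the lz4-compressed preload tarball read-only into a throwaway container and untars it straight into the node's named volume, so the new node starts with all images pre-seeded. Driven from Go it is just an exec of the docker CLI; `preloadTar`, `volume`, and `baseImage` below are placeholder variables for the paths shown in the log:

```go
// Fragment: the same `docker run --rm --entrypoint /usr/bin/tar ...`
// invocation as above, issued via os/exec (imports: fmt, os/exec).
cmd := exec.Command("docker", "run", "--rm",
	"--entrypoint", "/usr/bin/tar",
	"-v", preloadTar+":/preloaded.tar:ro",
	"-v", volume+":/extractDir",
	baseImage,
	"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
if out, err := cmd.CombinedOutput(); err != nil {
	panic(fmt.Sprintf("extract failed: %v: %s", err, out))
}
```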
	W1127 11:37:15.785128  165526 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1127 11:37:15.785210  165526 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1127 11:37:15.838321  165526 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-780990-m02 --name multinode-780990-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-780990-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-780990-m02 --network multinode-780990 --ip 192.168.58.3 --volume multinode-780990-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 11:37:16.152366  165526 cli_runner.go:164] Run: docker container inspect multinode-780990-m02 --format={{.State.Running}}
	I1127 11:37:16.170134  165526 cli_runner.go:164] Run: docker container inspect multinode-780990-m02 --format={{.State.Status}}
	I1127 11:37:16.186828  165526 cli_runner.go:164] Run: docker exec multinode-780990-m02 stat /var/lib/dpkg/alternatives/iptables
	I1127 11:37:16.223199  165526 oci.go:144] the created container "multinode-780990-m02" has a running status.
	I1127 11:37:16.223233  165526 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990-m02/id_rsa...
	I1127 11:37:16.509779  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1127 11:37:16.509824  165526 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1127 11:37:16.532464  165526 cli_runner.go:164] Run: docker container inspect multinode-780990-m02 --format={{.State.Status}}
	I1127 11:37:16.554764  165526 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1127 11:37:16.554836  165526 kic_runner.go:114] Args: [docker exec --privileged multinode-780990-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1127 11:37:16.653139  165526 cli_runner.go:164] Run: docker container inspect multinode-780990-m02 --format={{.State.Status}}
	I1127 11:37:16.671875  165526 machine.go:88] provisioning docker machine ...
	I1127 11:37:16.671976  165526 ubuntu.go:169] provisioning hostname "multinode-780990-m02"
	I1127 11:37:16.672044  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990-m02
	I1127 11:37:16.690525  165526 main.go:141] libmachine: Using SSH client type: native
	I1127 11:37:16.690999  165526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1127 11:37:16.691038  165526 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-780990-m02 && echo "multinode-780990-m02" | sudo tee /etc/hostname
	I1127 11:37:16.854058  165526 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-780990-m02
	
	I1127 11:37:16.854144  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990-m02
	I1127 11:37:16.871524  165526 main.go:141] libmachine: Using SSH client type: native
	I1127 11:37:16.871972  165526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1127 11:37:16.871995  165526 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-780990-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-780990-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-780990-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 11:37:16.995813  165526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 11:37:16.995853  165526 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17644-72381/.minikube CaCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17644-72381/.minikube}
	I1127 11:37:16.995875  165526 ubuntu.go:177] setting up certificates
	I1127 11:37:16.995889  165526 provision.go:83] configureAuth start
	I1127 11:37:16.995946  165526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-780990-m02
	I1127 11:37:17.012300  165526 provision.go:138] copyHostCerts
	I1127 11:37:17.012349  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem
	I1127 11:37:17.012385  165526 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem, removing ...
	I1127 11:37:17.012397  165526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem
	I1127 11:37:17.012476  165526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem (1675 bytes)
	I1127 11:37:17.012578  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem
	I1127 11:37:17.012606  165526 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem, removing ...
	I1127 11:37:17.012615  165526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem
	I1127 11:37:17.012654  165526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem (1082 bytes)
	I1127 11:37:17.012715  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem
	I1127 11:37:17.012738  165526 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem, removing ...
	I1127 11:37:17.012747  165526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem
	I1127 11:37:17.012778  165526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem (1123 bytes)
	I1127 11:37:17.012845  165526 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca-key.pem org=jenkins.multinode-780990-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-780990-m02]
	I1127 11:37:17.155395  165526 provision.go:172] copyRemoteCerts
	I1127 11:37:17.155455  165526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 11:37:17.155491  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990-m02
	I1127 11:37:17.172307  165526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990-m02/id_rsa Username:docker}
	I1127 11:37:17.259774  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1127 11:37:17.259845  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1127 11:37:17.281027  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1127 11:37:17.281085  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1127 11:37:17.302118  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1127 11:37:17.302175  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 11:37:17.322456  165526 provision.go:86] duration metric: configureAuth took 326.551291ms
	I1127 11:37:17.322488  165526 ubuntu.go:193] setting minikube options for container-runtime
	I1127 11:37:17.322696  165526 config.go:182] Loaded profile config "multinode-780990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 11:37:17.322823  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990-m02
	I1127 11:37:17.339162  165526 main.go:141] libmachine: Using SSH client type: native
	I1127 11:37:17.339503  165526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1127 11:37:17.339527  165526 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 11:37:17.544227  165526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 11:37:17.544257  165526 machine.go:91] provisioned docker machine in 872.295825ms
	I1127 11:37:17.544268  165526 client.go:171] LocalClient.Create took 7.448531959s
	I1127 11:37:17.544291  165526 start.go:167] duration metric: libmachine.API.Create for "multinode-780990" took 7.448602398s
	I1127 11:37:17.544301  165526 start.go:300] post-start starting for "multinode-780990-m02" (driver="docker")
	I1127 11:37:17.544315  165526 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 11:37:17.544420  165526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 11:37:17.544470  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990-m02
	I1127 11:37:17.560518  165526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990-m02/id_rsa Username:docker}
	I1127 11:37:17.648267  165526 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 11:37:17.651257  165526 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1127 11:37:17.651284  165526 command_runner.go:130] > NAME="Ubuntu"
	I1127 11:37:17.651292  165526 command_runner.go:130] > VERSION_ID="22.04"
	I1127 11:37:17.651300  165526 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1127 11:37:17.651311  165526 command_runner.go:130] > VERSION_CODENAME=jammy
	I1127 11:37:17.651322  165526 command_runner.go:130] > ID=ubuntu
	I1127 11:37:17.651329  165526 command_runner.go:130] > ID_LIKE=debian
	I1127 11:37:17.651340  165526 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1127 11:37:17.651349  165526 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1127 11:37:17.651360  165526 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1127 11:37:17.651376  165526 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1127 11:37:17.651386  165526 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1127 11:37:17.651435  165526 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 11:37:17.651470  165526 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 11:37:17.651488  165526 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 11:37:17.651501  165526 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1127 11:37:17.651518  165526 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-72381/.minikube/addons for local assets ...
	I1127 11:37:17.651582  165526 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-72381/.minikube/files for local assets ...
	I1127 11:37:17.651694  165526 filesync.go:149] local asset: /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem -> 791532.pem in /etc/ssl/certs
	I1127 11:37:17.651707  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem -> /etc/ssl/certs/791532.pem
	I1127 11:37:17.651819  165526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 11:37:17.659232  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem --> /etc/ssl/certs/791532.pem (1708 bytes)
	I1127 11:37:17.679915  165526 start.go:303] post-start completed in 135.585088ms
	I1127 11:37:17.680238  165526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-780990-m02
	I1127 11:37:17.696668  165526 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/config.json ...
	I1127 11:37:17.696922  165526 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 11:37:17.696963  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990-m02
	I1127 11:37:17.712581  165526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990-m02/id_rsa Username:docker}
	I1127 11:37:17.796044  165526 command_runner.go:130] > 31%
	I1127 11:37:17.796272  165526 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 11:37:17.800160  165526 command_runner.go:130] > 203G
	I1127 11:37:17.800356  165526 start.go:128] duration metric: createHost completed in 7.706792814s
	I1127 11:37:17.800383  165526 start.go:83] releasing machines lock for "multinode-780990-m02", held for 7.706942349s
	I1127 11:37:17.800453  165526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-780990-m02
	I1127 11:37:17.818471  165526 out.go:177] * Found network options:
	I1127 11:37:17.820203  165526 out.go:177]   - NO_PROXY=192.168.58.2
	W1127 11:37:17.821741  165526 proxy.go:119] fail to check proxy env: Error ip not in block
	W1127 11:37:17.821775  165526 proxy.go:119] fail to check proxy env: Error ip not in block
	I1127 11:37:17.821834  165526 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 11:37:17.821869  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990-m02
	I1127 11:37:17.821933  165526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 11:37:17.821984  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990-m02
	I1127 11:37:17.839229  165526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990-m02/id_rsa Username:docker}
	I1127 11:37:17.842545  165526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990-m02/id_rsa Username:docker}
	I1127 11:37:18.013364  165526 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1127 11:37:18.062277  165526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 11:37:18.066352  165526 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1127 11:37:18.066381  165526 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1127 11:37:18.066392  165526 command_runner.go:130] > Device: b0h/176d	Inode: 533119      Links: 1
	I1127 11:37:18.066399  165526 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 11:37:18.066405  165526 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1127 11:37:18.066410  165526 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1127 11:37:18.066415  165526 command_runner.go:130] > Change: 2023-11-27 11:17:12.627806055 +0000
	I1127 11:37:18.066422  165526 command_runner.go:130] >  Birth: 2023-11-27 11:17:12.627806055 +0000
	I1127 11:37:18.066591  165526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 11:37:18.083298  165526 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1127 11:37:18.083393  165526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 11:37:18.109196  165526 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1127 11:37:18.109253  165526 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1127 11:37:18.109261  165526 start.go:472] detecting cgroup driver to use...
	I1127 11:37:18.109288  165526 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 11:37:18.109329  165526 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 11:37:18.122409  165526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 11:37:18.132191  165526 docker.go:203] disabling cri-docker service (if available) ...
	I1127 11:37:18.132248  165526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 11:37:18.144494  165526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 11:37:18.156707  165526 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 11:37:18.238657  165526 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 11:37:18.251502  165526 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1127 11:37:18.313862  165526 docker.go:219] disabling docker service ...
	I1127 11:37:18.313941  165526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 11:37:18.330747  165526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 11:37:18.340803  165526 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 11:37:18.419086  165526 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1127 11:37:18.419169  165526 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 11:37:18.504267  165526 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1127 11:37:18.504352  165526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 11:37:18.514468  165526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 11:37:18.528136  165526 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1127 11:37:18.528925  165526 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1127 11:37:18.528975  165526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 11:37:18.537451  165526 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1127 11:37:18.537505  165526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 11:37:18.545843  165526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 11:37:18.554021  165526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 11:37:18.561948  165526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 11:37:18.569517  165526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 11:37:18.575939  165526 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1127 11:37:18.576580  165526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 11:37:18.583869  165526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 11:37:18.661619  165526 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1127 11:37:18.757820  165526 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1127 11:37:18.757897  165526 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1127 11:37:18.761166  165526 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1127 11:37:18.761192  165526 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1127 11:37:18.761204  165526 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I1127 11:37:18.761215  165526 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 11:37:18.761226  165526 command_runner.go:130] > Access: 2023-11-27 11:37:18.747034288 +0000
	I1127 11:37:18.761239  165526 command_runner.go:130] > Modify: 2023-11-27 11:37:18.747034288 +0000
	I1127 11:37:18.761257  165526 command_runner.go:130] > Change: 2023-11-27 11:37:18.747034288 +0000
	I1127 11:37:18.761267  165526 command_runner.go:130] >  Birth: -
	I1127 11:37:18.761314  165526 start.go:540] Will wait 60s for crictl version
	I1127 11:37:18.761375  165526 ssh_runner.go:195] Run: which crictl
	I1127 11:37:18.764322  165526 command_runner.go:130] > /usr/bin/crictl
	I1127 11:37:18.764388  165526 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 11:37:18.793684  165526 command_runner.go:130] > Version:  0.1.0
	I1127 11:37:18.793709  165526 command_runner.go:130] > RuntimeName:  cri-o
	I1127 11:37:18.793717  165526 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1127 11:37:18.793726  165526 command_runner.go:130] > RuntimeApiVersion:  v1
	I1127 11:37:18.795617  165526 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1127 11:37:18.795714  165526 ssh_runner.go:195] Run: crio --version
	I1127 11:37:18.827863  165526 command_runner.go:130] > crio version 1.24.6
	I1127 11:37:18.827888  165526 command_runner.go:130] > Version:          1.24.6
	I1127 11:37:18.827895  165526 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1127 11:37:18.827902  165526 command_runner.go:130] > GitTreeState:     clean
	I1127 11:37:18.827911  165526 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1127 11:37:18.827919  165526 command_runner.go:130] > GoVersion:        go1.18.2
	I1127 11:37:18.827926  165526 command_runner.go:130] > Compiler:         gc
	I1127 11:37:18.827933  165526 command_runner.go:130] > Platform:         linux/amd64
	I1127 11:37:18.827943  165526 command_runner.go:130] > Linkmode:         dynamic
	I1127 11:37:18.827958  165526 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1127 11:37:18.827967  165526 command_runner.go:130] > SeccompEnabled:   true
	I1127 11:37:18.827973  165526 command_runner.go:130] > AppArmorEnabled:  false
	I1127 11:37:18.828065  165526 ssh_runner.go:195] Run: crio --version
	I1127 11:37:18.859776  165526 command_runner.go:130] > crio version 1.24.6
	I1127 11:37:18.859805  165526 command_runner.go:130] > Version:          1.24.6
	I1127 11:37:18.859815  165526 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1127 11:37:18.859820  165526 command_runner.go:130] > GitTreeState:     clean
	I1127 11:37:18.859827  165526 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1127 11:37:18.859832  165526 command_runner.go:130] > GoVersion:        go1.18.2
	I1127 11:37:18.859837  165526 command_runner.go:130] > Compiler:         gc
	I1127 11:37:18.859845  165526 command_runner.go:130] > Platform:         linux/amd64
	I1127 11:37:18.859852  165526 command_runner.go:130] > Linkmode:         dynamic
	I1127 11:37:18.859863  165526 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1127 11:37:18.859875  165526 command_runner.go:130] > SeccompEnabled:   true
	I1127 11:37:18.859885  165526 command_runner.go:130] > AppArmorEnabled:  false
	I1127 11:37:18.863337  165526 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1127 11:37:18.864881  165526 out.go:177]   - env NO_PROXY=192.168.58.2
	I1127 11:37:18.866329  165526 cli_runner.go:164] Run: docker network inspect multinode-780990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 11:37:18.882530  165526 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1127 11:37:18.886096  165526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 11:37:18.895993  165526 certs.go:56] Setting up /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990 for IP: 192.168.58.3
	I1127 11:37:18.896024  165526 certs.go:190] acquiring lock for shared ca certs: {Name:mk5858a15575801c48b8e08b34d7442dd346ca1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:37:18.896151  165526 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.key
	I1127 11:37:18.896190  165526 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.key
	I1127 11:37:18.896203  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1127 11:37:18.896217  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1127 11:37:18.896228  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1127 11:37:18.896240  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1127 11:37:18.896287  165526 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/79153.pem (1338 bytes)
	W1127 11:37:18.896314  165526 certs.go:433] ignoring /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/79153_empty.pem, impossibly tiny 0 bytes
	I1127 11:37:18.896324  165526 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca-key.pem (1679 bytes)
	I1127 11:37:18.896363  165526 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem (1082 bytes)
	I1127 11:37:18.896385  165526 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem (1123 bytes)
	I1127 11:37:18.896412  165526 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem (1675 bytes)
	I1127 11:37:18.896448  165526 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem (1708 bytes)
	I1127 11:37:18.896477  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:37:18.896491  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/79153.pem -> /usr/share/ca-certificates/79153.pem
	I1127 11:37:18.896503  165526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem -> /usr/share/ca-certificates/791532.pem
	I1127 11:37:18.896808  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 11:37:18.918729  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1127 11:37:18.940105  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 11:37:18.960682  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1127 11:37:18.980977  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 11:37:19.002196  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/certs/79153.pem --> /usr/share/ca-certificates/79153.pem (1338 bytes)
	I1127 11:37:19.023181  165526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem --> /usr/share/ca-certificates/791532.pem (1708 bytes)
	I1127 11:37:19.044468  165526 ssh_runner.go:195] Run: openssl version
	I1127 11:37:19.049409  165526 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1127 11:37:19.049481  165526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/79153.pem && ln -fs /usr/share/ca-certificates/79153.pem /etc/ssl/certs/79153.pem"
	I1127 11:37:19.058243  165526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/79153.pem
	I1127 11:37:19.061284  165526 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 27 11:23 /usr/share/ca-certificates/79153.pem
	I1127 11:37:19.061310  165526 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 11:23 /usr/share/ca-certificates/79153.pem
	I1127 11:37:19.061341  165526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/79153.pem
	I1127 11:37:19.067245  165526 command_runner.go:130] > 51391683
	I1127 11:37:19.067460  165526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/79153.pem /etc/ssl/certs/51391683.0"
	I1127 11:37:19.075394  165526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/791532.pem && ln -fs /usr/share/ca-certificates/791532.pem /etc/ssl/certs/791532.pem"
	I1127 11:37:19.083510  165526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791532.pem
	I1127 11:37:19.086633  165526 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 27 11:23 /usr/share/ca-certificates/791532.pem
	I1127 11:37:19.086670  165526 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 11:23 /usr/share/ca-certificates/791532.pem
	I1127 11:37:19.086699  165526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791532.pem
	I1127 11:37:19.092569  165526 command_runner.go:130] > 3ec20f2e
	I1127 11:37:19.092779  165526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/791532.pem /etc/ssl/certs/3ec20f2e.0"
	I1127 11:37:19.101194  165526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 11:37:19.109554  165526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:37:19.112546  165526 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 27 11:17 /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:37:19.112602  165526 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 11:17 /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:37:19.112643  165526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:37:19.118416  165526 command_runner.go:130] > b5213941
	I1127 11:37:19.118628  165526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1127 11:37:19.126489  165526 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 11:37:19.129253  165526 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 11:37:19.129289  165526 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 11:37:19.129375  165526 ssh_runner.go:195] Run: crio config
	I1127 11:37:19.164935  165526 command_runner.go:130] ! time="2023-11-27 11:37:19.164563844Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1127 11:37:19.164968  165526 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1127 11:37:19.170860  165526 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1127 11:37:19.170885  165526 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1127 11:37:19.170895  165526 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1127 11:37:19.170899  165526 command_runner.go:130] > #
	I1127 11:37:19.170906  165526 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1127 11:37:19.170912  165526 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1127 11:37:19.170918  165526 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1127 11:37:19.170926  165526 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1127 11:37:19.170940  165526 command_runner.go:130] > # reload'.
	I1127 11:37:19.170951  165526 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1127 11:37:19.170964  165526 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1127 11:37:19.170980  165526 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1127 11:37:19.170988  165526 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1127 11:37:19.170995  165526 command_runner.go:130] > [crio]
	I1127 11:37:19.171001  165526 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1127 11:37:19.171008  165526 command_runner.go:130] > # containers images, in this directory.
	I1127 11:37:19.171016  165526 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1127 11:37:19.171030  165526 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1127 11:37:19.171043  165526 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1127 11:37:19.171056  165526 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1127 11:37:19.171069  165526 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1127 11:37:19.171079  165526 command_runner.go:130] > # storage_driver = "vfs"
	I1127 11:37:19.171089  165526 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1127 11:37:19.171098  165526 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1127 11:37:19.171102  165526 command_runner.go:130] > # storage_option = [
	I1127 11:37:19.171111  165526 command_runner.go:130] > # ]
	I1127 11:37:19.171126  165526 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1127 11:37:19.171139  165526 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1127 11:37:19.171149  165526 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1127 11:37:19.171163  165526 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1127 11:37:19.171177  165526 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1127 11:37:19.171182  165526 command_runner.go:130] > # always happen on a node reboot
	I1127 11:37:19.171187  165526 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1127 11:37:19.171196  165526 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1127 11:37:19.171206  165526 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1127 11:37:19.171219  165526 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1127 11:37:19.171232  165526 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1127 11:37:19.171244  165526 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1127 11:37:19.171260  165526 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1127 11:37:19.171268  165526 command_runner.go:130] > # internal_wipe = true
	I1127 11:37:19.171274  165526 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1127 11:37:19.171287  165526 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1127 11:37:19.171301  165526 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1127 11:37:19.171310  165526 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1127 11:37:19.171322  165526 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1127 11:37:19.171331  165526 command_runner.go:130] > [crio.api]
	I1127 11:37:19.171341  165526 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1127 11:37:19.171351  165526 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1127 11:37:19.171356  165526 command_runner.go:130] > # IP address on which the stream server will listen.
	I1127 11:37:19.171366  165526 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1127 11:37:19.171382  165526 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1127 11:37:19.171394  165526 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1127 11:37:19.171404  165526 command_runner.go:130] > # stream_port = "0"
	I1127 11:37:19.171413  165526 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1127 11:37:19.171423  165526 command_runner.go:130] > # stream_enable_tls = false
	I1127 11:37:19.171436  165526 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1127 11:37:19.171443  165526 command_runner.go:130] > # stream_idle_timeout = ""
	I1127 11:37:19.171453  165526 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1127 11:37:19.171467  165526 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1127 11:37:19.171477  165526 command_runner.go:130] > # minutes.
	I1127 11:37:19.171484  165526 command_runner.go:130] > # stream_tls_cert = ""
	I1127 11:37:19.171494  165526 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1127 11:37:19.171508  165526 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1127 11:37:19.171518  165526 command_runner.go:130] > # stream_tls_key = ""
	I1127 11:37:19.171527  165526 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1127 11:37:19.171538  165526 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1127 11:37:19.171551  165526 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1127 11:37:19.171558  165526 command_runner.go:130] > # stream_tls_ca = ""
	I1127 11:37:19.171573  165526 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1127 11:37:19.171584  165526 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1127 11:37:19.171595  165526 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1127 11:37:19.171606  165526 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1127 11:37:19.171626  165526 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1127 11:37:19.171641  165526 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1127 11:37:19.171648  165526 command_runner.go:130] > [crio.runtime]
	I1127 11:37:19.171661  165526 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1127 11:37:19.171706  165526 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1127 11:37:19.171716  165526 command_runner.go:130] > # "nofile=1024:2048"
	I1127 11:37:19.171729  165526 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1127 11:37:19.171739  165526 command_runner.go:130] > # default_ulimits = [
	I1127 11:37:19.171743  165526 command_runner.go:130] > # ]
	I1127 11:37:19.171754  165526 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1127 11:37:19.171763  165526 command_runner.go:130] > # no_pivot = false
	I1127 11:37:19.171776  165526 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1127 11:37:19.171790  165526 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1127 11:37:19.171801  165526 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1127 11:37:19.171815  165526 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1127 11:37:19.171825  165526 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1127 11:37:19.171836  165526 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1127 11:37:19.171842  165526 command_runner.go:130] > # conmon = ""
	I1127 11:37:19.171853  165526 command_runner.go:130] > # Cgroup setting for conmon
	I1127 11:37:19.171868  165526 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1127 11:37:19.171878  165526 command_runner.go:130] > conmon_cgroup = "pod"
	I1127 11:37:19.171892  165526 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1127 11:37:19.171903  165526 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1127 11:37:19.171913  165526 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1127 11:37:19.171920  165526 command_runner.go:130] > # conmon_env = [
	I1127 11:37:19.171924  165526 command_runner.go:130] > # ]
	I1127 11:37:19.171939  165526 command_runner.go:130] > # Additional environment variables to set for all the
	I1127 11:37:19.171951  165526 command_runner.go:130] > # containers. These are overridden if set in the
	I1127 11:37:19.171963  165526 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1127 11:37:19.171977  165526 command_runner.go:130] > # default_env = [
	I1127 11:37:19.171986  165526 command_runner.go:130] > # ]
	I1127 11:37:19.171996  165526 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1127 11:37:19.172005  165526 command_runner.go:130] > # selinux = false
	I1127 11:37:19.172015  165526 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1127 11:37:19.172029  165526 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1127 11:37:19.172042  165526 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1127 11:37:19.172052  165526 command_runner.go:130] > # seccomp_profile = ""
	I1127 11:37:19.172065  165526 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1127 11:37:19.172077  165526 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1127 11:37:19.172088  165526 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1127 11:37:19.172095  165526 command_runner.go:130] > # which might increase security.
	I1127 11:37:19.172103  165526 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1127 11:37:19.172118  165526 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1127 11:37:19.172131  165526 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1127 11:37:19.172144  165526 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1127 11:37:19.172157  165526 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1127 11:37:19.172168  165526 command_runner.go:130] > # This option supports live configuration reload.
	I1127 11:37:19.172176  165526 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1127 11:37:19.172184  165526 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1127 11:37:19.172194  165526 command_runner.go:130] > # the cgroup blockio controller.
	I1127 11:37:19.172202  165526 command_runner.go:130] > # blockio_config_file = ""
	I1127 11:37:19.172216  165526 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1127 11:37:19.172225  165526 command_runner.go:130] > # irqbalance daemon.
	I1127 11:37:19.172238  165526 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1127 11:37:19.172252  165526 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1127 11:37:19.172261  165526 command_runner.go:130] > # This option supports live configuration reload.
	I1127 11:37:19.172266  165526 command_runner.go:130] > # rdt_config_file = ""
	I1127 11:37:19.172278  165526 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1127 11:37:19.172287  165526 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1127 11:37:19.172300  165526 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1127 11:37:19.172307  165526 command_runner.go:130] > # separate_pull_cgroup = ""
	I1127 11:37:19.172320  165526 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1127 11:37:19.172334  165526 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1127 11:37:19.172343  165526 command_runner.go:130] > # will be added.
	I1127 11:37:19.172348  165526 command_runner.go:130] > # default_capabilities = [
	I1127 11:37:19.172359  165526 command_runner.go:130] > # 	"CHOWN",
	I1127 11:37:19.172370  165526 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1127 11:37:19.172380  165526 command_runner.go:130] > # 	"FSETID",
	I1127 11:37:19.172390  165526 command_runner.go:130] > # 	"FOWNER",
	I1127 11:37:19.172396  165526 command_runner.go:130] > # 	"SETGID",
	I1127 11:37:19.172406  165526 command_runner.go:130] > # 	"SETUID",
	I1127 11:37:19.172413  165526 command_runner.go:130] > # 	"SETPCAP",
	I1127 11:37:19.172423  165526 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1127 11:37:19.172429  165526 command_runner.go:130] > # 	"KILL",
	I1127 11:37:19.172435  165526 command_runner.go:130] > # ]
	I1127 11:37:19.172444  165526 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1127 11:37:19.172459  165526 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1127 11:37:19.172470  165526 command_runner.go:130] > # add_inheritable_capabilities = true
	I1127 11:37:19.172483  165526 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1127 11:37:19.172493  165526 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1127 11:37:19.172502  165526 command_runner.go:130] > # default_sysctls = [
	I1127 11:37:19.172507  165526 command_runner.go:130] > # ]
	I1127 11:37:19.172517  165526 command_runner.go:130] > # List of devices on the host that a
	I1127 11:37:19.172527  165526 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1127 11:37:19.172537  165526 command_runner.go:130] > # allowed_devices = [
	I1127 11:37:19.172547  165526 command_runner.go:130] > # 	"/dev/fuse",
	I1127 11:37:19.172553  165526 command_runner.go:130] > # ]
	I1127 11:37:19.172562  165526 command_runner.go:130] > # List of additional devices. specified as
	I1127 11:37:19.172592  165526 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1127 11:37:19.172604  165526 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1127 11:37:19.172614  165526 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1127 11:37:19.172624  165526 command_runner.go:130] > # additional_devices = [
	I1127 11:37:19.172633  165526 command_runner.go:130] > # ]
	I1127 11:37:19.172644  165526 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1127 11:37:19.172654  165526 command_runner.go:130] > # cdi_spec_dirs = [
	I1127 11:37:19.172664  165526 command_runner.go:130] > # 	"/etc/cdi",
	I1127 11:37:19.172671  165526 command_runner.go:130] > # 	"/var/run/cdi",
	I1127 11:37:19.172680  165526 command_runner.go:130] > # ]
	I1127 11:37:19.172690  165526 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1127 11:37:19.172703  165526 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1127 11:37:19.172715  165526 command_runner.go:130] > # Defaults to false.
	I1127 11:37:19.172725  165526 command_runner.go:130] > # device_ownership_from_security_context = false
	I1127 11:37:19.172736  165526 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1127 11:37:19.172747  165526 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1127 11:37:19.172757  165526 command_runner.go:130] > # hooks_dir = [
	I1127 11:37:19.172765  165526 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1127 11:37:19.172771  165526 command_runner.go:130] > # ]
	I1127 11:37:19.172781  165526 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1127 11:37:19.172795  165526 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1127 11:37:19.172805  165526 command_runner.go:130] > # its default mounts from the following two files:
	I1127 11:37:19.172810  165526 command_runner.go:130] > #
	I1127 11:37:19.172822  165526 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1127 11:37:19.172834  165526 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1127 11:37:19.172846  165526 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1127 11:37:19.172855  165526 command_runner.go:130] > #
	I1127 11:37:19.172866  165526 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1127 11:37:19.172879  165526 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1127 11:37:19.172893  165526 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1127 11:37:19.172904  165526 command_runner.go:130] > #      only add mounts it finds in this file.
	I1127 11:37:19.172912  165526 command_runner.go:130] > #
	I1127 11:37:19.172916  165526 command_runner.go:130] > # default_mounts_file = ""
	I1127 11:37:19.172922  165526 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1127 11:37:19.172929  165526 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1127 11:37:19.172935  165526 command_runner.go:130] > # pids_limit = 0
	I1127 11:37:19.172941  165526 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1127 11:37:19.172950  165526 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1127 11:37:19.172956  165526 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1127 11:37:19.172970  165526 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1127 11:37:19.172977  165526 command_runner.go:130] > # log_size_max = -1
	I1127 11:37:19.172984  165526 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1127 11:37:19.172990  165526 command_runner.go:130] > # log_to_journald = false
	I1127 11:37:19.172997  165526 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1127 11:37:19.173004  165526 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1127 11:37:19.173010  165526 command_runner.go:130] > # Path to directory for container attach sockets.
	I1127 11:37:19.173017  165526 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1127 11:37:19.173022  165526 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1127 11:37:19.173026  165526 command_runner.go:130] > # bind_mount_prefix = ""
	I1127 11:37:19.173039  165526 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1127 11:37:19.173045  165526 command_runner.go:130] > # read_only = false
	I1127 11:37:19.173052  165526 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1127 11:37:19.173060  165526 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1127 11:37:19.173065  165526 command_runner.go:130] > # live configuration reload.
	I1127 11:37:19.173071  165526 command_runner.go:130] > # log_level = "info"
	I1127 11:37:19.173077  165526 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1127 11:37:19.173084  165526 command_runner.go:130] > # This option supports live configuration reload.
	I1127 11:37:19.173089  165526 command_runner.go:130] > # log_filter = ""
	I1127 11:37:19.173097  165526 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1127 11:37:19.173105  165526 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1127 11:37:19.173109  165526 command_runner.go:130] > # separated by comma.
	I1127 11:37:19.173116  165526 command_runner.go:130] > # uid_mappings = ""
	I1127 11:37:19.173122  165526 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1127 11:37:19.173129  165526 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1127 11:37:19.173135  165526 command_runner.go:130] > # separated by comma.
	I1127 11:37:19.173139  165526 command_runner.go:130] > # gid_mappings = ""
	I1127 11:37:19.173145  165526 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1127 11:37:19.173154  165526 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1127 11:37:19.173160  165526 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1127 11:37:19.173167  165526 command_runner.go:130] > # minimum_mappable_uid = -1
	I1127 11:37:19.173173  165526 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1127 11:37:19.173181  165526 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1127 11:37:19.173187  165526 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1127 11:37:19.173193  165526 command_runner.go:130] > # minimum_mappable_gid = -1
	I1127 11:37:19.173199  165526 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1127 11:37:19.173206  165526 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1127 11:37:19.173212  165526 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1127 11:37:19.173218  165526 command_runner.go:130] > # ctr_stop_timeout = 30
	I1127 11:37:19.173224  165526 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1127 11:37:19.173234  165526 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1127 11:37:19.173242  165526 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1127 11:37:19.173247  165526 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1127 11:37:19.173253  165526 command_runner.go:130] > # drop_infra_ctr = true
	I1127 11:37:19.173259  165526 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1127 11:37:19.173267  165526 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1127 11:37:19.173275  165526 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1127 11:37:19.173282  165526 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1127 11:37:19.173288  165526 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1127 11:37:19.173295  165526 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1127 11:37:19.173299  165526 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1127 11:37:19.173308  165526 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1127 11:37:19.173313  165526 command_runner.go:130] > # pinns_path = ""
	I1127 11:37:19.173318  165526 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1127 11:37:19.173325  165526 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1127 11:37:19.173333  165526 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1127 11:37:19.173338  165526 command_runner.go:130] > # default_runtime = "runc"
	I1127 11:37:19.173345  165526 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1127 11:37:19.173352  165526 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1127 11:37:19.173364  165526 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1127 11:37:19.173371  165526 command_runner.go:130] > # creation as a file is not desired either.
	I1127 11:37:19.173379  165526 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1127 11:37:19.173386  165526 command_runner.go:130] > # the hostname is being managed dynamically.
	I1127 11:37:19.173391  165526 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1127 11:37:19.173397  165526 command_runner.go:130] > # ]
	I1127 11:37:19.173403  165526 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1127 11:37:19.173411  165526 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1127 11:37:19.173417  165526 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1127 11:37:19.173423  165526 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1127 11:37:19.173429  165526 command_runner.go:130] > #
	I1127 11:37:19.173434  165526 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1127 11:37:19.173441  165526 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1127 11:37:19.173445  165526 command_runner.go:130] > #  runtime_type = "oci"
	I1127 11:37:19.173450  165526 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1127 11:37:19.173455  165526 command_runner.go:130] > #  privileged_without_host_devices = false
	I1127 11:37:19.173462  165526 command_runner.go:130] > #  allowed_annotations = []
	I1127 11:37:19.173466  165526 command_runner.go:130] > # Where:
	I1127 11:37:19.173471  165526 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1127 11:37:19.173480  165526 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1127 11:37:19.173486  165526 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1127 11:37:19.173494  165526 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1127 11:37:19.173498  165526 command_runner.go:130] > #   in $PATH.
	I1127 11:37:19.173505  165526 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1127 11:37:19.173512  165526 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1127 11:37:19.173518  165526 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1127 11:37:19.173522  165526 command_runner.go:130] > #   state.
	I1127 11:37:19.173530  165526 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1127 11:37:19.173536  165526 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1127 11:37:19.173544  165526 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1127 11:37:19.173550  165526 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1127 11:37:19.173558  165526 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1127 11:37:19.173565  165526 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1127 11:37:19.173572  165526 command_runner.go:130] > #   The currently recognized values are:
	I1127 11:37:19.173578  165526 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1127 11:37:19.173588  165526 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1127 11:37:19.173596  165526 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1127 11:37:19.173602  165526 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1127 11:37:19.173614  165526 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1127 11:37:19.173621  165526 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1127 11:37:19.173629  165526 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1127 11:37:19.173636  165526 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1127 11:37:19.173643  165526 command_runner.go:130] > #   should be moved to the container's cgroup
	I1127 11:37:19.173647  165526 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1127 11:37:19.173653  165526 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1127 11:37:19.173657  165526 command_runner.go:130] > runtime_type = "oci"
	I1127 11:37:19.173662  165526 command_runner.go:130] > runtime_root = "/run/runc"
	I1127 11:37:19.173666  165526 command_runner.go:130] > runtime_config_path = ""
	I1127 11:37:19.173672  165526 command_runner.go:130] > monitor_path = ""
	I1127 11:37:19.173676  165526 command_runner.go:130] > monitor_cgroup = ""
	I1127 11:37:19.173682  165526 command_runner.go:130] > monitor_exec_cgroup = ""
	I1127 11:37:19.173709  165526 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1127 11:37:19.173715  165526 command_runner.go:130] > # running containers
	I1127 11:37:19.173720  165526 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1127 11:37:19.173727  165526 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1127 11:37:19.173736  165526 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1127 11:37:19.173742  165526 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1127 11:37:19.173749  165526 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1127 11:37:19.173754  165526 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1127 11:37:19.173761  165526 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1127 11:37:19.173766  165526 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1127 11:37:19.173773  165526 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1127 11:37:19.173778  165526 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
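As a hedged sketch of the handler format documented above, an extra runtime such as crun could be registered via a drop-in; the binary and root paths are assumptions and would need to match the host.

# Illustrative drop-in registering a "crun" handler (paths assumed).
sudo tee /etc/crio/crio.conf.d/20-crun.conf <<'EOF'
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
runtime_root = "/run/crun"
EOF
sudo systemctl restart crio

A pod would then select this handler through a RuntimeClass whose handler field is "crun".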
	I1127 11:37:19.173786  165526 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1127 11:37:19.173792  165526 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1127 11:37:19.173800  165526 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1127 11:37:19.173807  165526 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1127 11:37:19.173817  165526 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1127 11:37:19.173823  165526 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1127 11:37:19.173834  165526 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1127 11:37:19.173842  165526 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1127 11:37:19.173850  165526 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1127 11:37:19.173858  165526 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1127 11:37:19.173864  165526 command_runner.go:130] > # Example:
	I1127 11:37:19.173869  165526 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1127 11:37:19.173876  165526 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1127 11:37:19.173881  165526 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1127 11:37:19.173889  165526 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1127 11:37:19.173893  165526 command_runner.go:130] > # cpuset = "0-1"
	I1127 11:37:19.173900  165526 command_runner.go:130] > # cpushares = 0
	I1127 11:37:19.173904  165526 command_runner.go:130] > # Where:
	I1127 11:37:19.173909  165526 command_runner.go:130] > # The workload name is workload-type.
	I1127 11:37:19.173917  165526 command_runner.go:130] > # To select it, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1127 11:37:19.173923  165526 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1127 11:37:19.173929  165526 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1127 11:37:19.173937  165526 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1127 11:37:19.173943  165526 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1127 11:37:19.173948  165526 command_runner.go:130] > # 
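To make the annotation flow concrete, a hypothetical pod opting into the workload-type example above might look like the sketch below; the pod name, container name, and cpushares value are invented for illustration.

# Hypothetical pod using the example workload's annotations.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo
  annotations:
    io.crio/workload: ""                              # activation (key only, value ignored)
    io.crio.workload-type/app: '{"cpushares": "512"}' # per-container override
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
EOF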
	I1127 11:37:19.173956  165526 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1127 11:37:19.173962  165526 command_runner.go:130] > #
	I1127 11:37:19.173971  165526 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1127 11:37:19.173979  165526 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1127 11:37:19.173985  165526 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1127 11:37:19.173994  165526 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1127 11:37:19.174002  165526 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1127 11:37:19.174007  165526 command_runner.go:130] > [crio.image]
	I1127 11:37:19.174013  165526 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1127 11:37:19.174018  165526 command_runner.go:130] > # default_transport = "docker://"
	I1127 11:37:19.174026  165526 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1127 11:37:19.174033  165526 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1127 11:37:19.174039  165526 command_runner.go:130] > # global_auth_file = ""
	I1127 11:37:19.174044  165526 command_runner.go:130] > # The image used to instantiate infra containers.
	I1127 11:37:19.174051  165526 command_runner.go:130] > # This option supports live configuration reload.
	I1127 11:37:19.174056  165526 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1127 11:37:19.174065  165526 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1127 11:37:19.174071  165526 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1127 11:37:19.174078  165526 command_runner.go:130] > # This option supports live configuration reload.
	I1127 11:37:19.174083  165526 command_runner.go:130] > # pause_image_auth_file = ""
	I1127 11:37:19.174091  165526 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1127 11:37:19.174097  165526 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1127 11:37:19.174107  165526 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1127 11:37:19.174113  165526 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1127 11:37:19.174119  165526 command_runner.go:130] > # pause_command = "/pause"
	I1127 11:37:19.174125  165526 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1127 11:37:19.174134  165526 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1127 11:37:19.174141  165526 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1127 11:37:19.174149  165526 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1127 11:37:19.174155  165526 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1127 11:37:19.174161  165526 command_runner.go:130] > # signature_policy = ""
	I1127 11:37:19.174170  165526 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1127 11:37:19.174178  165526 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1127 11:37:19.174182  165526 command_runner.go:130] > # changing them here.
	I1127 11:37:19.174189  165526 command_runner.go:130] > # insecure_registries = [
	I1127 11:37:19.174193  165526 command_runner.go:130] > # ]
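Purely as an illustration, and with the caveat from the comment above that /etc/containers/registries.conf is the preferred place: an insecure registry could be allowed with a drop-in like the following, where "registry.local:5000" is a made-up host.

# Illustrative only; the registry host is hypothetical.
sudo tee /etc/crio/crio.conf.d/30-insecure.conf <<'EOF'
[crio.image]
insecure_registries = ["registry.local:5000"]
EOF
sudo systemctl restart crio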
	I1127 11:37:19.174203  165526 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1127 11:37:19.174208  165526 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1127 11:37:19.174212  165526 command_runner.go:130] > # image_volumes = "mkdir"
	I1127 11:37:19.174217  165526 command_runner.go:130] > # Temporary directory to use for storing big files
	I1127 11:37:19.174224  165526 command_runner.go:130] > # big_files_temporary_dir = ""
	I1127 11:37:19.174230  165526 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1127 11:37:19.174235  165526 command_runner.go:130] > # CNI plugins.
	I1127 11:37:19.174239  165526 command_runner.go:130] > [crio.network]
	I1127 11:37:19.174245  165526 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1127 11:37:19.174253  165526 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1127 11:37:19.174257  165526 command_runner.go:130] > # cni_default_network = ""
	I1127 11:37:19.174265  165526 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1127 11:37:19.174270  165526 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1127 11:37:19.174278  165526 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1127 11:37:19.174284  165526 command_runner.go:130] > # plugin_dirs = [
	I1127 11:37:19.174288  165526 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1127 11:37:19.174292  165526 command_runner.go:130] > # ]
	I1127 11:37:19.174297  165526 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1127 11:37:19.174302  165526 command_runner.go:130] > [crio.metrics]
	I1127 11:37:19.174307  165526 command_runner.go:130] > # Globally enable or disable metrics support.
	I1127 11:37:19.174311  165526 command_runner.go:130] > # enable_metrics = false
	I1127 11:37:19.174319  165526 command_runner.go:130] > # Specify enabled metrics collectors.
	I1127 11:37:19.174327  165526 command_runner.go:130] > # By default all metrics are enabled.
	I1127 11:37:19.174333  165526 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1127 11:37:19.174341  165526 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1127 11:37:19.174347  165526 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1127 11:37:19.174353  165526 command_runner.go:130] > # metrics_collectors = [
	I1127 11:37:19.174357  165526 command_runner.go:130] > # 	"operations",
	I1127 11:37:19.174364  165526 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1127 11:37:19.174369  165526 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1127 11:37:19.174376  165526 command_runner.go:130] > # 	"operations_errors",
	I1127 11:37:19.174380  165526 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1127 11:37:19.174387  165526 command_runner.go:130] > # 	"image_pulls_by_name",
	I1127 11:37:19.174391  165526 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1127 11:37:19.174398  165526 command_runner.go:130] > # 	"image_pulls_failures",
	I1127 11:37:19.174402  165526 command_runner.go:130] > # 	"image_pulls_successes",
	I1127 11:37:19.174409  165526 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1127 11:37:19.174413  165526 command_runner.go:130] > # 	"image_layer_reuse",
	I1127 11:37:19.174417  165526 command_runner.go:130] > # 	"containers_oom_total",
	I1127 11:37:19.174424  165526 command_runner.go:130] > # 	"containers_oom",
	I1127 11:37:19.174428  165526 command_runner.go:130] > # 	"processes_defunct",
	I1127 11:37:19.174432  165526 command_runner.go:130] > # 	"operations_total",
	I1127 11:37:19.174439  165526 command_runner.go:130] > # 	"operations_latency_seconds",
	I1127 11:37:19.174444  165526 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1127 11:37:19.174450  165526 command_runner.go:130] > # 	"operations_errors_total",
	I1127 11:37:19.174455  165526 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1127 11:37:19.174461  165526 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1127 11:37:19.174466  165526 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1127 11:37:19.174472  165526 command_runner.go:130] > # 	"image_pulls_success_total",
	I1127 11:37:19.174477  165526 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1127 11:37:19.174483  165526 command_runner.go:130] > # 	"containers_oom_count_total",
	I1127 11:37:19.174486  165526 command_runner.go:130] > # ]
	I1127 11:37:19.174492  165526 command_runner.go:130] > # The port on which the metrics server will listen.
	I1127 11:37:19.174498  165526 command_runner.go:130] > # metrics_port = 9090
	I1127 11:37:19.174504  165526 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1127 11:37:19.174510  165526 command_runner.go:130] > # metrics_socket = ""
	I1127 11:37:19.174515  165526 command_runner.go:130] > # The certificate for the secure metrics server.
	I1127 11:37:19.174523  165526 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1127 11:37:19.174529  165526 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1127 11:37:19.174536  165526 command_runner.go:130] > # certificate on any modification event.
	I1127 11:37:19.174540  165526 command_runner.go:130] > # metrics_cert = ""
	I1127 11:37:19.174547  165526 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1127 11:37:19.174552  165526 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1127 11:37:19.174559  165526 command_runner.go:130] > # metrics_key = ""
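A minimal sketch of enabling the metrics server described above and spot-checking it from the node; it assumes the default port 9090 and that at least one crio_-prefixed collector has emitted data.

# Illustrative: enable metrics, restart CRI-O, then scrape locally.
sudo tee /etc/crio/crio.conf.d/40-metrics.conf <<'EOF'
[crio.metrics]
enable_metrics = true
metrics_port = 9090
EOF
sudo systemctl restart crio
curl -s http://127.0.0.1:9090/metrics | grep -m 5 '^crio_'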
	I1127 11:37:19.174565  165526 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1127 11:37:19.174571  165526 command_runner.go:130] > [crio.tracing]
	I1127 11:37:19.174577  165526 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1127 11:37:19.174583  165526 command_runner.go:130] > # enable_tracing = false
	I1127 11:37:19.174588  165526 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1127 11:37:19.174596  165526 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1127 11:37:19.174601  165526 command_runner.go:130] > # Number of samples to collect per million spans.
	I1127 11:37:19.174608  165526 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
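Similarly, a hedged sketch of pointing tracing at an OTLP/gRPC collector; the collector address is assumed, and nothing in this run enables tracing.

# Illustrative: export spans to an assumed local OTLP gRPC collector.
sudo tee /etc/crio/crio.conf.d/50-tracing.conf <<'EOF'
[crio.tracing]
enable_tracing = true
tracing_endpoint = "127.0.0.1:4317"
tracing_sampling_rate_per_million = 1000000
EOF
sudo systemctl restart crio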
	I1127 11:37:19.174614  165526 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1127 11:37:19.174620  165526 command_runner.go:130] > [crio.stats]
	I1127 11:37:19.174625  165526 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1127 11:37:19.174633  165526 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1127 11:37:19.174637  165526 command_runner.go:130] > # stats_collection_period = 0
	I1127 11:37:19.174709  165526 cni.go:84] Creating CNI manager for ""
	I1127 11:37:19.174720  165526 cni.go:136] 2 nodes found, recommending kindnet
	I1127 11:37:19.174731  165526 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 11:37:19.174749  165526 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-780990 NodeName:multinode-780990-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1127 11:37:19.174856  165526 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-780990-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
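As an aside, a generated config like the one above could be sanity-checked before the join; recent kubeadm releases provide 'kubeadm config validate', though this run does not invoke it and the file path below is hypothetical.

# Illustrative pre-join check (file path assumed; not run by the test).
sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /tmp/kubeadm.yaml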
	I1127 11:37:19.174908  165526 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-780990-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-780990 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1127 11:37:19.174953  165526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1127 11:37:19.183321  165526 command_runner.go:130] > kubeadm
	I1127 11:37:19.183342  165526 command_runner.go:130] > kubectl
	I1127 11:37:19.183348  165526 command_runner.go:130] > kubelet
	I1127 11:37:19.183369  165526 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 11:37:19.183409  165526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1127 11:37:19.191188  165526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1127 11:37:19.206985  165526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1127 11:37:19.223657  165526 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1127 11:37:19.226574  165526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 11:37:19.235828  165526 host.go:66] Checking if "multinode-780990" exists ...
	I1127 11:37:19.236022  165526 config.go:182] Loaded profile config "multinode-780990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 11:37:19.236051  165526 start.go:304] JoinCluster: &{Name:multinode-780990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-780990 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:37:19.236131  165526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1127 11:37:19.236172  165526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990
	I1127 11:37:19.252542  165526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990/id_rsa Username:docker}
	I1127 11:37:19.387774  165526 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 7ep3so.s3zjldelppipel2l --discovery-token-ca-cert-hash sha256:8a429d79c655c2807afe3f51b29d4e9332b2ae21312f3b8d4be03bf35a7ebe07 
	I1127 11:37:19.392018  165526 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1127 11:37:19.392067  165526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7ep3so.s3zjldelppipel2l --discovery-token-ca-cert-hash sha256:8a429d79c655c2807afe3f51b29d4e9332b2ae21312f3b8d4be03bf35a7ebe07 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-780990-m02"
	I1127 11:37:19.425167  165526 command_runner.go:130] > [preflight] Running pre-flight checks
	I1127 11:37:19.452078  165526 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1127 11:37:19.452111  165526 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1046-gcp
	I1127 11:37:19.452120  165526 command_runner.go:130] > OS: Linux
	I1127 11:37:19.452129  165526 command_runner.go:130] > CGROUPS_CPU: enabled
	I1127 11:37:19.452138  165526 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1127 11:37:19.452145  165526 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1127 11:37:19.452153  165526 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1127 11:37:19.452162  165526 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1127 11:37:19.452176  165526 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1127 11:37:19.452186  165526 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1127 11:37:19.452195  165526 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1127 11:37:19.452202  165526 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1127 11:37:19.527995  165526 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1127 11:37:19.528028  165526 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1127 11:37:19.551623  165526 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 11:37:19.551651  165526 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 11:37:19.551659  165526 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1127 11:37:19.625831  165526 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1127 11:37:21.638976  165526 command_runner.go:130] > This node has joined the cluster:
	I1127 11:37:21.639000  165526 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1127 11:37:21.639024  165526 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1127 11:37:21.639031  165526 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1127 11:37:21.641553  165526 command_runner.go:130] ! W1127 11:37:19.424766    1105 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1127 11:37:21.641590  165526 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1127 11:37:21.641616  165526 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 11:37:21.641648  165526 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7ep3so.s3zjldelppipel2l --discovery-token-ca-cert-hash sha256:8a429d79c655c2807afe3f51b29d4e9332b2ae21312f3b8d4be03bf35a7ebe07 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-780990-m02": (2.24956572s)
	I1127 11:37:21.641672  165526 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1127 11:37:21.800813  165526 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1127 11:37:21.800852  165526 start.go:306] JoinCluster complete in 2.564798778s
	I1127 11:37:21.800867  165526 cni.go:84] Creating CNI manager for ""
	I1127 11:37:21.800874  165526 cni.go:136] 2 nodes found, recommending kindnet
	I1127 11:37:21.800922  165526 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1127 11:37:21.804325  165526 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1127 11:37:21.804354  165526 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I1127 11:37:21.804365  165526 command_runner.go:130] > Device: 33h/51d	Inode: 584907      Links: 1
	I1127 11:37:21.804373  165526 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 11:37:21.804378  165526 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1127 11:37:21.804383  165526 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1127 11:37:21.804389  165526 command_runner.go:130] > Change: 2023-11-27 11:17:13.015845700 +0000
	I1127 11:37:21.804395  165526 command_runner.go:130] >  Birth: 2023-11-27 11:17:12.991843248 +0000
	I1127 11:37:21.804443  165526 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1127 11:37:21.804453  165526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1127 11:37:21.821096  165526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1127 11:37:22.040033  165526 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1127 11:37:22.044040  165526 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1127 11:37:22.047005  165526 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1127 11:37:22.057188  165526 command_runner.go:130] > daemonset.apps/kindnet configured
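One way to confirm the applied kindnet manifest converged, not done by the test itself, is to watch the DaemonSet rollout:

# Illustrative convergence check for the kindnet DaemonSet.
kubectl --context multinode-780990 -n kube-system rollout status daemonset/kindnet --timeout=2m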
	I1127 11:37:22.061357  165526 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:37:22.061647  165526 kapi.go:59] client config for multinode-780990: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/client.crt", KeyFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/client.key", CAFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24d80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 11:37:22.062036  165526 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1127 11:37:22.062052  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:22.062062  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:22.062071  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:22.064213  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:22.064232  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:22.064239  165526 round_trippers.go:580]     Audit-Id: 834238b4-bdb7-4c5b-8468-69845522c5e7
	I1127 11:37:22.064245  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:22.064250  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:22.064255  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:22.064262  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:22.064267  165526 round_trippers.go:580]     Content-Length: 291
	I1127 11:37:22.064279  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:22 GMT
	I1127 11:37:22.064298  165526 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"807b8525-b261-47f5-a79c-105cde32cffa","resourceVersion":"414","creationTimestamp":"2023-11-27T11:36:22Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1127 11:37:22.064388  165526 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-780990" context rescaled to 1 replicas
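The scale-subresource call above corresponds to the following kubectl command, shown only for orientation:

# Equivalent manual rescale of coredns to a single replica.
kubectl --context multinode-780990 -n kube-system scale deployment coredns --replicas=1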
	I1127 11:37:22.064415  165526 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1127 11:37:22.067558  165526 out.go:177] * Verifying Kubernetes components...
	I1127 11:37:22.069089  165526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:37:22.079995  165526 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:37:22.080209  165526 kapi.go:59] client config for multinode-780990: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/client.crt", KeyFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/profiles/multinode-780990/client.key", CAFile:"/home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24d80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 11:37:22.080438  165526 node_ready.go:35] waiting up to 6m0s for node "multinode-780990-m02" to be "Ready" ...
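The readiness polling that follows amounts to this one-liner, included as a hedged illustration:

# Illustrative equivalent of the node readiness poll below.
kubectl --context multinode-780990 wait --for=condition=Ready node/multinode-780990-m02 --timeout=6m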
	I1127 11:37:22.080501  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:22.080509  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:22.080517  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:22.080525  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:22.082983  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:22.083004  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:22.083013  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:22.083021  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:22.083029  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:22.083040  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:22 GMT
	I1127 11:37:22.083050  165526 round_trippers.go:580]     Audit-Id: ceaf945c-4cf3-4960-9a5b-e7c9aac8b602
	I1127 11:37:22.083061  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:22.083207  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"450","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 11:37:22.083579  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:22.083594  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:22.083601  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:22.083610  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:22.085360  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:22.085375  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:22.085381  165526 round_trippers.go:580]     Audit-Id: 9a7ea1d0-c56b-4d4d-b7a6-0b2787bf0578
	I1127 11:37:22.085387  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:22.085392  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:22.085397  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:22.085402  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:22.085407  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:22 GMT
	I1127 11:37:22.085550  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"450","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 11:37:22.586608  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:22.586627  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:22.586635  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:22.586641  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:22.589334  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:22.589363  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:22.589376  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:22.589387  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:22.589396  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:22.589404  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:22.589417  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:22 GMT
	I1127 11:37:22.589430  165526 round_trippers.go:580]     Audit-Id: b5fb0ec2-2941-4bae-b201-545f4201b014
	I1127 11:37:22.589574  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"450","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 11:37:23.086118  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:23.086143  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:23.086151  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:23.086158  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:23.088542  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:23.088565  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:23.088572  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:23 GMT
	I1127 11:37:23.088578  165526 round_trippers.go:580]     Audit-Id: ff6d38db-3a8f-47ea-b332-46053e4dfe3c
	I1127 11:37:23.088583  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:23.088591  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:23.088599  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:23.088607  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:23.088715  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"450","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 11:37:23.586332  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:23.586356  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:23.586364  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:23.586369  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:23.588479  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:23.588505  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:23.588516  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:23.588526  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:23.588533  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:23.588542  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:23.588554  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:23 GMT
	I1127 11:37:23.588562  165526 round_trippers.go:580]     Audit-Id: 81b25f60-b621-4bca-8e3a-6a0f771d6610
	I1127 11:37:23.588693  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"450","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 11:37:24.086283  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:24.086308  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:24.086316  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:24.086322  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:24.088638  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:24.088659  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:24.088669  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:24.088675  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:24.088680  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:24.088685  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:24 GMT
	I1127 11:37:24.088691  165526 round_trippers.go:580]     Audit-Id: 145b6bef-83c4-4765-b1d6-7f1a03b9871b
	I1127 11:37:24.088696  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:24.088839  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"450","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 11:37:24.089158  165526 node_ready.go:58] node "multinode-780990-m02" has status "Ready":"False"
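	The half-second GET loop above is the test binary waiting for the freshly added node to report Ready: node_ready.go logs the Ready condition after each poll, and the Request/Response lines come from client-go's debug round tripper. A minimal client-go sketch of the same polling pattern follows (illustrative only: the function names, the fixed 500 ms interval, and loading a kubeconfig are assumptions, not minikube's actual node_ready.go implementation):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isNodeReady reports whether the node's NodeReady condition is True.
	func isNodeReady(node *corev1.Node) bool {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: use ~/.kube/config; in this run minikube points the
		// context at https://192.168.58.2:8443.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Re-fetch the Node object until it reports Ready, roughly every
		// 500 ms (the cadence visible in the timestamps above).
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-780990-m02", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			if isNodeReady(node) {
				fmt.Printf("node %q is Ready\n", node.Name)
				return
			}
			fmt.Printf("node %q has status \"Ready\":\"False\"\n", node.Name)
			time.Sleep(500 * time.Millisecond)
		}
	}

	Each iteration is a full GET of the Node object, which is why the same headers and a near-identical body repeat below until the Ready condition flips.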
	I1127 11:37:24.586443  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:24.586464  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:24.586472  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:24.586479  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:24.588788  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:24.588808  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:24.588815  165526 round_trippers.go:580]     Audit-Id: 2b12d719-5ba5-4950-b828-d7dc7a0f8505
	I1127 11:37:24.588820  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:24.588826  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:24.588831  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:24.588835  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:24.588840  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:24 GMT
	I1127 11:37:24.589082  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"450","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 11:37:25.086817  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:25.086839  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:25.086847  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:25.086853  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:25.089260  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:25.089288  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:25.089299  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:25 GMT
	I1127 11:37:25.089310  165526 round_trippers.go:580]     Audit-Id: b143f5f4-6b91-4b91-b8d8-52566d902618
	I1127 11:37:25.089318  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:25.089325  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:25.089333  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:25.089342  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:25.089450  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"450","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 11:37:25.586045  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:25.586067  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:25.586075  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:25.586081  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:25.588260  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:25.588292  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:25.588303  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:25.588311  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:25 GMT
	I1127 11:37:25.588319  165526 round_trippers.go:580]     Audit-Id: 2a75d18a-d368-422c-a5c7-6139570cfa67
	I1127 11:37:25.588331  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:25.588340  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:25.588351  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:25.588463  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"450","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 11:37:26.086043  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:26.086079  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:26.086087  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:26.086093  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:26.088514  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:26.088539  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:26.088558  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:26.088568  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:26.088579  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:26.088590  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:26.088601  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:26 GMT
	I1127 11:37:26.088611  165526 round_trippers.go:580]     Audit-Id: 37ef91e5-041c-404d-93a5-8da58fbfe560
	I1127 11:37:26.088767  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"469","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 11:37:26.586463  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:26.586484  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:26.586492  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:26.586498  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:26.588706  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:26.588729  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:26.588736  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:26.588742  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:26.588748  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:26 GMT
	I1127 11:37:26.588753  165526 round_trippers.go:580]     Audit-Id: 53be98ba-0e79-4aea-b4fa-d52075f95b05
	I1127 11:37:26.588759  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:26.588764  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:26.588879  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"469","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 11:37:26.589211  165526 node_ready.go:58] node "multinode-780990-m02" has status "Ready":"False"
	I1127 11:37:27.086402  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:27.086424  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:27.086432  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:27.086438  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:27.088682  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:27.088703  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:27.088710  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:27.088717  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:27.088723  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:27.088728  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:27 GMT
	I1127 11:37:27.088733  165526 round_trippers.go:580]     Audit-Id: a401d165-68fe-40fd-960b-70a7e8795bf7
	I1127 11:37:27.088738  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:27.088926  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"469","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 11:37:27.586700  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:27.586724  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:27.586732  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:27.586740  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:27.589002  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:27.589027  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:27.589036  165526 round_trippers.go:580]     Audit-Id: 8b4112b6-4c09-4f1a-9186-74787e189a42
	I1127 11:37:27.589045  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:27.589052  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:27.589059  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:27.589066  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:27.589074  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:27 GMT
	I1127 11:37:27.589209  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"469","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 11:37:28.086905  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:28.086933  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:28.086941  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:28.086949  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:28.089284  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:28.089306  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:28.089313  165526 round_trippers.go:580]     Audit-Id: e97599ed-0634-4ef1-a9a2-5f891273bc2d
	I1127 11:37:28.089319  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:28.089328  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:28.089336  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:28.089343  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:28.089350  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:28 GMT
	I1127 11:37:28.089472  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"469","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 11:37:28.586844  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:28.586873  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:28.586887  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:28.586896  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:28.589238  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:28.589260  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:28.589270  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:28 GMT
	I1127 11:37:28.589278  165526 round_trippers.go:580]     Audit-Id: a7eb7548-b300-4f10-8066-aebb676419fe
	I1127 11:37:28.589287  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:28.589296  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:28.589305  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:28.589314  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:28.589439  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"469","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 11:37:28.589753  165526 node_ready.go:58] node "multinode-780990-m02" has status "Ready":"False"
	I1127 11:37:29.086061  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:29.086090  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:29.086099  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:29.086105  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:29.088533  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:29.088559  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:29.088568  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:29.088576  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:29.088584  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:29 GMT
	I1127 11:37:29.088593  165526 round_trippers.go:580]     Audit-Id: 09d0f71e-2afa-4910-a97b-da8daba8d6be
	I1127 11:37:29.088602  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:29.088612  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:29.088731  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"469","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 11:37:29.586906  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:29.586933  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:29.586946  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:29.586954  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:29.589328  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:29.589356  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:29.589368  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:29.589377  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:29.589385  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:29.589393  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:29 GMT
	I1127 11:37:29.589403  165526 round_trippers.go:580]     Audit-Id: 5d739583-447c-45aa-8037-3aee39a350e9
	I1127 11:37:29.589413  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:29.589533  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"469","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 11:37:30.086121  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:30.086148  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:30.086159  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:30.086168  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:30.088471  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:30.088491  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:30.088498  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:30.088504  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:30.088510  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:30.088515  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:30.088521  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:30 GMT
	I1127 11:37:30.088528  165526 round_trippers.go:580]     Audit-Id: 2645f3f7-cf48-4c18-be66-44d5c84a7602
	I1127 11:37:30.088671  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"469","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 11:37:30.586250  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:30.586284  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:30.586292  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:30.586298  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:30.588552  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:30.588575  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:30.588584  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:30.588592  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:30.588601  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:30 GMT
	I1127 11:37:30.588609  165526 round_trippers.go:580]     Audit-Id: b0861013-8df6-4fc5-a4f3-81c50dbe381f
	I1127 11:37:30.588617  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:30.588625  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:30.588770  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"469","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 11:37:31.086351  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:31.086377  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:31.086385  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:31.086391  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:31.088785  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:31.088807  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:31.088817  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:31.088826  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:31.088835  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:31.088848  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:31 GMT
	I1127 11:37:31.088857  165526 round_trippers.go:580]     Audit-Id: 125ee6fb-7096-4799-9448-204c395a317b
	I1127 11:37:31.088868  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:31.089064  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"469","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 11:37:31.089422  165526 node_ready.go:58] node "multinode-780990-m02" has status "Ready":"False"
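	The resourceVersion in the response bodies climbs (450, then 469, then 475 below) as kubeadm and the kubelet update the Node object while it is still NotReady. A watch would deliver those same updates as events rather than repeated half-second GETs; a hypothetical alternative sketch, reusing the imports and the isNodeReady helper from the polling example above (not what minikube does in this log, which shows plain polling):

	// watchNodeReady blocks until the named node reports Ready, using a watch
	// instead of polling. client is a clientset as in the sketch above.
	func watchNodeReady(ctx context.Context, client kubernetes.Interface, name string) error {
		w, err := client.CoreV1().Nodes().Watch(ctx, metav1.ListOptions{
			FieldSelector: "metadata.name=" + name,
		})
		if err != nil {
			return err
		}
		defer w.Stop()
		// Each update to the Node (each new resourceVersion) arrives as one event.
		for ev := range w.ResultChan() {
			node, ok := ev.Object.(*corev1.Node)
			if !ok {
				continue
			}
			if isNodeReady(node) {
				return nil
			}
		}
		return fmt.Errorf("watch for node %q closed before it became Ready", name)
	}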
	I1127 11:37:31.586727  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:31.586750  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:31.586759  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:31.586765  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:31.589255  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:31.589281  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:31.589291  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:31.589299  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:31.589306  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:31 GMT
	I1127 11:37:31.589314  165526 round_trippers.go:580]     Audit-Id: 2c5062f3-252e-4599-90d7-638f5d3f57c6
	I1127 11:37:31.589322  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:31.589349  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:31.589462  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"469","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 11:37:32.086011  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:32.086035  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:32.086044  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:32.086052  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:32.088339  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:32.088367  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:32.088377  165526 round_trippers.go:580]     Audit-Id: 4eedb059-27b2-4667-9d47-e060baa78a2d
	I1127 11:37:32.088385  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:32.088394  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:32.088403  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:32.088412  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:32.088418  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:32 GMT
	I1127 11:37:32.088522  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:32.586520  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:32.586545  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:32.586559  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:32.586565  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:32.588963  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:32.588983  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:32.588991  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:32.588996  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:32.589002  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:32.589007  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:32 GMT
	I1127 11:37:32.589013  165526 round_trippers.go:580]     Audit-Id: 50660e84-a7e0-4ae7-a124-bd443346613a
	I1127 11:37:32.589018  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:32.589118  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:33.086818  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:33.086842  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:33.086850  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:33.086856  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:33.089250  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:33.089273  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:33.089280  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:33 GMT
	I1127 11:37:33.089286  165526 round_trippers.go:580]     Audit-Id: 5dd216c5-b81e-40f2-815a-5270d69ca1b0
	I1127 11:37:33.089291  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:33.089296  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:33.089300  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:33.089306  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:33.089444  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:33.089757  165526 node_ready.go:58] node "multinode-780990-m02" has status "Ready":"False"
	I1127 11:37:33.586047  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:33.586069  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:33.586077  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:33.586083  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:33.588439  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:33.588464  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:33.588475  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:33.588484  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:33.588493  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:33 GMT
	I1127 11:37:33.588501  165526 round_trippers.go:580]     Audit-Id: cdf32fc6-300c-469b-9ffc-e76e16ac078b
	I1127 11:37:33.588510  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:33.588520  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:33.588624  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:34.086098  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:34.086122  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:34.086130  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:34.086136  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:34.088421  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:34.088442  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:34.088449  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:34.088463  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:34 GMT
	I1127 11:37:34.088473  165526 round_trippers.go:580]     Audit-Id: 90f595fc-0aac-4289-b8d9-e5b2a1146544
	I1127 11:37:34.088482  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:34.088491  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:34.088506  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:34.088651  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:34.586183  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:34.586211  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:34.586223  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:34.586230  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:34.588547  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:34.588568  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:34.588576  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:34.588582  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:34.588590  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:34 GMT
	I1127 11:37:34.588598  165526 round_trippers.go:580]     Audit-Id: c4525f47-440b-4139-a624-a5650282851b
	I1127 11:37:34.588613  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:34.588622  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:34.588727  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:35.086973  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:35.087007  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:35.087015  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:35.087021  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:35.089364  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:35.089384  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:35.089391  165526 round_trippers.go:580]     Audit-Id: 9576430a-5943-4851-a979-446fd71dc2d4
	I1127 11:37:35.089397  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:35.089402  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:35.089410  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:35.089418  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:35.089428  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:35 GMT
	I1127 11:37:35.089619  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:35.089970  165526 node_ready.go:58] node "multinode-780990-m02" has status "Ready":"False"
	I1127 11:37:35.586175  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:35.586195  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:35.586204  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:35.586210  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:35.588438  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:35.588460  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:35.588472  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:35.588481  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:35 GMT
	I1127 11:37:35.588490  165526 round_trippers.go:580]     Audit-Id: 62be8fc4-a60a-4c65-8f03-b087bb842098
	I1127 11:37:35.588500  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:35.588509  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:35.588521  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:35.588627  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:36.086292  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:36.086319  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:36.086333  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:36.086339  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:36.088706  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:36.088730  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:36.088737  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:36.088743  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:36.088748  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:36 GMT
	I1127 11:37:36.088753  165526 round_trippers.go:580]     Audit-Id: 011188a8-7cbf-416f-bbbe-c3541b9b8456
	I1127 11:37:36.088758  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:36.088763  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:36.088950  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:36.586990  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:36.587012  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:36.587021  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:36.587027  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:36.589484  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:36.589511  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:36.589520  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:36.589529  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:36.589535  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:36 GMT
	I1127 11:37:36.589543  165526 round_trippers.go:580]     Audit-Id: 45758d6d-74be-41d5-9964-cd595aca6f0f
	I1127 11:37:36.589551  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:36.589559  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:36.589734  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:37.086328  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:37.086357  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:37.086368  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:37.086376  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:37.088810  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:37.088833  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:37.088840  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:37 GMT
	I1127 11:37:37.088846  165526 round_trippers.go:580]     Audit-Id: 438c45a6-4276-4078-b409-2532233867e6
	I1127 11:37:37.088852  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:37.088857  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:37.088861  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:37.088866  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:37.088982  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:37.586894  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:37.586922  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:37.586930  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:37.586937  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:37.589256  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:37.589276  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:37.589283  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:37.589289  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:37.589294  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:37 GMT
	I1127 11:37:37.589299  165526 round_trippers.go:580]     Audit-Id: 7213d794-2a59-4a83-ae32-b8532df25eb3
	I1127 11:37:37.589309  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:37.589314  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:37.589415  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:37.589708  165526 node_ready.go:58] node "multinode-780990-m02" has status "Ready":"False"
	I1127 11:37:38.086837  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:38.086861  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:38.086871  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:38.086879  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:38.089186  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:38.089206  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:38.089213  165526 round_trippers.go:580]     Audit-Id: 4ccb4a4d-1e98-4f31-b8ea-0feced82fd0a
	I1127 11:37:38.089219  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:38.089224  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:38.089230  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:38.089235  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:38.089240  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:38 GMT
	I1127 11:37:38.089410  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:38.586035  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:38.586063  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:38.586073  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:38.586081  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:38.588412  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:38.588435  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:38.588442  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:38.588447  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:38.588452  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:38.588460  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:38.588468  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:38 GMT
	I1127 11:37:38.588475  165526 round_trippers.go:580]     Audit-Id: f2a624f3-9ab8-432d-bed3-cf3827492095
	I1127 11:37:38.588621  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:39.086182  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:39.086210  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:39.086219  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:39.086225  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:39.088715  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:39.088736  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:39.088743  165526 round_trippers.go:580]     Audit-Id: 192ff79c-c39c-47ab-b964-6648c3728f3c
	I1127 11:37:39.088750  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:39.088759  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:39.088766  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:39.088773  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:39.088781  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:39 GMT
	I1127 11:37:39.088890  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:39.586483  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:39.586512  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:39.586520  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:39.586527  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:39.588673  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:39.588696  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:39.588705  165526 round_trippers.go:580]     Audit-Id: cb6cb496-5711-41fa-b81b-f833bb2725c4
	I1127 11:37:39.588713  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:39.588721  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:39.588729  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:39.588736  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:39.588745  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:39 GMT
	I1127 11:37:39.588860  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:40.086242  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:40.086266  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:40.086284  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:40.086292  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:40.088475  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:40.088503  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:40.088515  165526 round_trippers.go:580]     Audit-Id: 016ecaa2-f51e-453f-8480-27d01ca171fe
	I1127 11:37:40.088524  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:40.088533  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:40.088545  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:40.088561  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:40.088574  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:40 GMT
	I1127 11:37:40.088686  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:40.088978  165526 node_ready.go:58] node "multinode-780990-m02" has status "Ready":"False"
	I1127 11:37:40.586231  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:40.586251  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:40.586259  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:40.586264  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:40.588568  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:40.588586  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:40.588593  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:40.588601  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:40.588608  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:40.588618  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:40.588627  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:40 GMT
	I1127 11:37:40.588636  165526 round_trippers.go:580]     Audit-Id: b2b81ffa-8092-4e13-9fff-5094b7bcebaf
	I1127 11:37:40.588749  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:41.086103  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:41.086130  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:41.086141  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:41.086149  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:41.088501  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:41.088529  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:41.088540  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:41.088548  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:41.088558  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:41.088571  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:41 GMT
	I1127 11:37:41.088580  165526 round_trippers.go:580]     Audit-Id: 946cfc4b-5f47-459a-acb6-86bbde7e3005
	I1127 11:37:41.088588  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:41.088725  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:41.586281  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:41.586304  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:41.586313  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:41.586319  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:41.588523  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:41.588545  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:41.588553  165526 round_trippers.go:580]     Audit-Id: 481e57c7-e637-4013-9024-0a413e37aa30
	I1127 11:37:41.588558  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:41.588565  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:41.588573  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:41.588585  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:41.588599  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:41 GMT
	I1127 11:37:41.588709  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:42.086371  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:42.086399  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:42.086410  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:42.086419  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:42.088684  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:42.088707  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:42.088714  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:42.088721  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:42.088729  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:42.088741  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:42 GMT
	I1127 11:37:42.088749  165526 round_trippers.go:580]     Audit-Id: d9bc6a44-b3e5-426a-b5c5-e8aa8bc552d7
	I1127 11:37:42.088757  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:42.088893  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:42.089208  165526 node_ready.go:58] node "multinode-780990-m02" has status "Ready":"False"
	I1127 11:37:42.586813  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:42.586840  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:42.586852  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:42.586861  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:42.589087  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:42.589105  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:42.589112  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:42.589121  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:42 GMT
	I1127 11:37:42.589126  165526 round_trippers.go:580]     Audit-Id: 0e793ce6-c851-4ead-bdd4-d9e7691b5112
	I1127 11:37:42.589132  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:42.589137  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:42.589144  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:42.589293  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:43.086945  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:43.086971  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:43.086979  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:43.086985  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:43.089234  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:43.089257  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:43.089266  165526 round_trippers.go:580]     Audit-Id: b981758b-2aee-46a7-98ba-3814309c359d
	I1127 11:37:43.089274  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:43.089280  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:43.089287  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:43.089296  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:43.089304  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:43 GMT
	I1127 11:37:43.089444  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:43.586089  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:43.586114  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:43.586122  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:43.586128  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:43.588296  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:43.588319  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:43.588330  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:43.588339  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:43.588347  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:43.588354  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:43 GMT
	I1127 11:37:43.588363  165526 round_trippers.go:580]     Audit-Id: 3c85cb79-8ea8-4bdc-9d39-37fce23e2264
	I1127 11:37:43.588370  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:43.588478  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:44.086103  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:44.086130  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:44.086138  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:44.086145  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:44.088669  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:44.088708  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:44.088718  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:44 GMT
	I1127 11:37:44.088728  165526 round_trippers.go:580]     Audit-Id: 4f4729d4-cc95-430b-9219-932b00a4929a
	I1127 11:37:44.088736  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:44.088744  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:44.088755  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:44.088767  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:44.088877  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:44.586499  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:44.586528  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:44.586540  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:44.586548  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:44.588773  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:44.588794  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:44.588801  165526 round_trippers.go:580]     Audit-Id: aef8b23b-52c8-4eb4-b003-461800482465
	I1127 11:37:44.588807  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:44.588812  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:44.588817  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:44.588822  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:44.588827  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:44 GMT
	I1127 11:37:44.588961  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:44.589276  165526 node_ready.go:58] node "multinode-780990-m02" has status "Ready":"False"
	I1127 11:37:45.086695  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:45.086719  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:45.086732  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:45.086742  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:45.089077  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:45.089100  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:45.089108  165526 round_trippers.go:580]     Audit-Id: 6ee8ee24-4037-48db-a7dc-b8d64b2c29e8
	I1127 11:37:45.089113  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:45.089119  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:45.089124  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:45.089130  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:45.089137  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:45 GMT
	I1127 11:37:45.089268  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:45.586910  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:45.586934  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:45.586942  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:45.586950  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:45.589361  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:45.589385  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:45.589392  165526 round_trippers.go:580]     Audit-Id: 42e6677a-6e6d-48f6-a53c-9d997a0517ef
	I1127 11:37:45.589398  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:45.589403  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:45.589408  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:45.589413  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:45.589418  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:45 GMT
	I1127 11:37:45.589520  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:46.086971  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:46.086999  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:46.087010  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:46.087019  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:46.089465  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:46.089489  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:46.089498  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:46 GMT
	I1127 11:37:46.089505  165526 round_trippers.go:580]     Audit-Id: 5b9c04b1-b992-4f08-a95b-c8945136142a
	I1127 11:37:46.089515  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:46.089523  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:46.089531  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:46.089543  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:46.089638  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:46.586369  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:46.586391  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:46.586399  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:46.586405  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:46.588843  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:46.588864  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:46.588871  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:46.588877  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:46.588883  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:46.588893  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:46.588904  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:46 GMT
	I1127 11:37:46.588913  165526 round_trippers.go:580]     Audit-Id: 5f8558b7-959e-4d19-9d23-9863c62fe147
	I1127 11:37:46.589067  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:46.589379  165526 node_ready.go:58] node "multinode-780990-m02" has status "Ready":"False"
	I1127 11:37:47.086732  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:47.086755  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:47.086764  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:47.086770  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:47.089250  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:47.089272  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:47.089279  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:47 GMT
	I1127 11:37:47.089285  165526 round_trippers.go:580]     Audit-Id: 0f4bd369-fed5-450f-9606-270ab2b4d4e2
	I1127 11:37:47.089290  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:47.089295  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:47.089300  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:47.089306  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:47.089479  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:47.586148  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:47.586177  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:47.586186  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:47.586192  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:47.588644  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:47.588667  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:47.588677  165526 round_trippers.go:580]     Audit-Id: 10b87c7b-e055-48b1-b03a-1f5e15c9fb61
	I1127 11:37:47.588684  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:47.588691  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:47.588701  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:47.588709  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:47.588720  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:47 GMT
	I1127 11:37:47.588816  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:48.086399  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:48.086425  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:48.086433  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:48.086440  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:48.088755  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:48.088779  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:48.088789  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:48.088796  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:48 GMT
	I1127 11:37:48.088803  165526 round_trippers.go:580]     Audit-Id: a374de9e-b736-41bd-a8ca-7f7d783f28f7
	I1127 11:37:48.088810  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:48.088817  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:48.088825  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:48.089116  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:48.586574  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:48.586598  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:48.586607  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:48.586614  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:48.589063  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:48.589089  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:48.589098  165526 round_trippers.go:580]     Audit-Id: dccabe2a-ede3-43de-8ea4-858211655022
	I1127 11:37:48.589106  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:48.589114  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:48.589122  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:48.589134  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:48.589145  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:48 GMT
	I1127 11:37:48.589269  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:48.589631  165526 node_ready.go:58] node "multinode-780990-m02" has status "Ready":"False"
	I1127 11:37:49.086959  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:49.086981  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:49.086991  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:49.086998  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:49.089327  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:49.089363  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:49.089370  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:49.089377  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:49.089382  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:49 GMT
	I1127 11:37:49.089387  165526 round_trippers.go:580]     Audit-Id: c5337e10-1eaf-4720-91d6-e977e79bf04a
	I1127 11:37:49.089392  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:49.089398  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:49.089527  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:49.586149  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:49.586170  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:49.586179  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:49.586185  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:49.588631  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:49.588655  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:49.588662  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:49.588668  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:49.588673  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:49 GMT
	I1127 11:37:49.588678  165526 round_trippers.go:580]     Audit-Id: 77714bbf-78db-457f-bfec-a0dce6d71a89
	I1127 11:37:49.588683  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:49.588688  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:49.588823  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:50.086463  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:50.086491  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:50.086501  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:50.086510  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:50.088781  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:50.088800  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:50.088807  165526 round_trippers.go:580]     Audit-Id: df36a273-3fd6-48a3-bfeb-095607c8e79a
	I1127 11:37:50.088813  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:50.088819  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:50.088824  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:50.088836  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:50.088847  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:50 GMT
	I1127 11:37:50.088982  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:50.586629  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:50.586653  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:50.586661  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:50.586667  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:50.589048  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:50.589077  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:50.589084  165526 round_trippers.go:580]     Audit-Id: 7a5400fc-720f-484a-853a-58b6a2ef35a8
	I1127 11:37:50.589090  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:50.589095  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:50.589100  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:50.589105  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:50.589110  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:50 GMT
	I1127 11:37:50.589258  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:51.086944  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:51.086976  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:51.086985  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:51.086994  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:51.089242  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:51.089268  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:51.089277  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:51.089282  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:51.089288  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:51 GMT
	I1127 11:37:51.089293  165526 round_trippers.go:580]     Audit-Id: ccb56bef-e744-4a63-b2e9-24733efd5dc1
	I1127 11:37:51.089298  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:51.089306  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:51.089424  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:51.089727  165526 node_ready.go:58] node "multinode-780990-m02" has status "Ready":"False"
	I1127 11:37:51.585986  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:51.586009  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:51.586019  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:51.586028  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:51.588313  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:51.588333  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:51.588340  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:51 GMT
	I1127 11:37:51.588348  165526 round_trippers.go:580]     Audit-Id: 4444ccaf-4aaa-4830-9bda-c259ba5de5ed
	I1127 11:37:51.588356  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:51.588364  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:51.588372  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:51.588383  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:51.588488  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:52.086162  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:52.086188  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:52.086196  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:52.086202  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:52.088671  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:52.088696  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:52.088704  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:52.088710  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:52 GMT
	I1127 11:37:52.088716  165526 round_trippers.go:580]     Audit-Id: 54e2b264-8be1-4bdb-b50b-18f308027d80
	I1127 11:37:52.088721  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:52.088728  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:52.088733  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:52.088889  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:52.586787  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:52.586812  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:52.586820  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:52.586825  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:52.589129  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:52.589153  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:52.589170  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:52.589176  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:52.589184  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:52.589192  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:52 GMT
	I1127 11:37:52.589204  165526 round_trippers.go:580]     Audit-Id: b7d571f9-2ff0-40f8-98db-79f0f3250cc9
	I1127 11:37:52.589212  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:52.589344  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"475","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 11:37:53.087036  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:53.087064  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:53.087075  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:53.087084  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:53.089430  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:53.089452  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:53.089459  165526 round_trippers.go:580]     Audit-Id: 68c0a5ec-13d9-421a-bb6f-506e6a1ef0d5
	I1127 11:37:53.089465  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:53.089470  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:53.089475  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:53.089480  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:53.089485  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:53 GMT
	I1127 11:37:53.089638  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"498","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I1127 11:37:53.090009  165526 node_ready.go:49] node "multinode-780990-m02" has status "Ready":"True"
	I1127 11:37:53.090036  165526 node_ready.go:38] duration metric: took 31.009582412s waiting for node "multinode-780990-m02" to be "Ready" ...
	I1127 11:37:53.090049  165526 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 11:37:53.090123  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 11:37:53.090133  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:53.090141  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:53.090146  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:53.093503  165526 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 11:37:53.093526  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:53.093535  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:53.093540  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:53.093546  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:53.093552  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:53.093557  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:53 GMT
	I1127 11:37:53.093563  165526 round_trippers.go:580]     Audit-Id: be0d580f-bc5b-4ce3-a821-f1d2b37e1d6d
	I1127 11:37:53.094084  165526 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"498"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4jsq5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c4d42d52-2ac2-435b-a219-96b0b3934f2d","resourceVersion":"410","creationTimestamp":"2023-11-27T11:36:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"372276c5-2c58-4ce2-8fb2-7a04d78d7e05","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"372276c5-2c58-4ce2-8fb2-7a04d78d7e05\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1127 11:37:53.096201  165526 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4jsq5" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:53.096278  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4jsq5
	I1127 11:37:53.096286  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:53.096295  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:53.096303  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:53.098220  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:53.098239  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:53.098249  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:53.098256  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:53.098266  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:53 GMT
	I1127 11:37:53.098275  165526 round_trippers.go:580]     Audit-Id: 8d0c1dca-f245-4bd3-a57f-1714d713069b
	I1127 11:37:53.098288  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:53.098298  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:53.098421  165526 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4jsq5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c4d42d52-2ac2-435b-a219-96b0b3934f2d","resourceVersion":"410","creationTimestamp":"2023-11-27T11:36:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"372276c5-2c58-4ce2-8fb2-7a04d78d7e05","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"372276c5-2c58-4ce2-8fb2-7a04d78d7e05\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1127 11:37:53.098866  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:53.098882  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:53.098905  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:53.098914  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:53.100699  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:53.100715  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:53.100722  165526 round_trippers.go:580]     Audit-Id: c65aa4db-7cf5-4cfe-9053-5acb246f97d8
	I1127 11:37:53.100727  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:53.100732  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:53.100737  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:53.100742  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:53.100747  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:53 GMT
	I1127 11:37:53.100933  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"416","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 11:37:53.101344  165526 pod_ready.go:92] pod "coredns-5dd5756b68-4jsq5" in "kube-system" namespace has status "Ready":"True"
	I1127 11:37:53.101365  165526 pod_ready.go:81] duration metric: took 5.141322ms waiting for pod "coredns-5dd5756b68-4jsq5" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:53.101377  165526 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-780990" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:53.101442  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-780990
	I1127 11:37:53.101527  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:53.101548  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:53.101562  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:53.103630  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:53.103650  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:53.103660  165526 round_trippers.go:580]     Audit-Id: 1fc777d3-07a3-48c3-a5ab-8af09f3879c7
	I1127 11:37:53.103678  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:53.103690  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:53.103698  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:53.103708  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:53.103716  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:53 GMT
	I1127 11:37:53.103804  165526 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-780990","namespace":"kube-system","uid":"1502b7c7-223d-4753-8417-bcfa91c25b37","resourceVersion":"282","creationTimestamp":"2023-11-27T11:36:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"46e54cccbfa94a04c0955770423d5f05","kubernetes.io/config.mirror":"46e54cccbfa94a04c0955770423d5f05","kubernetes.io/config.seen":"2023-11-27T11:36:22.976528163Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1127 11:37:53.104174  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:53.104186  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:53.104193  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:53.104202  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:53.105888  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:53.105907  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:53.105916  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:53.105924  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:53.105931  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:53.105940  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:53 GMT
	I1127 11:37:53.105954  165526 round_trippers.go:580]     Audit-Id: 74a51140-d379-4855-a5ae-d7a9ceacbe59
	I1127 11:37:53.105982  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:53.106114  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"416","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 11:37:53.106411  165526 pod_ready.go:92] pod "etcd-multinode-780990" in "kube-system" namespace has status "Ready":"True"
	I1127 11:37:53.106426  165526 pod_ready.go:81] duration metric: took 5.04247ms waiting for pod "etcd-multinode-780990" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:53.106443  165526 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-780990" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:53.106494  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-780990
	I1127 11:37:53.106502  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:53.106508  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:53.106514  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:53.108472  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:53.108489  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:53.108501  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:53.108510  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:53.108519  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:53 GMT
	I1127 11:37:53.108527  165526 round_trippers.go:580]     Audit-Id: 069e65b8-e1bd-415c-a26e-6adb824b9634
	I1127 11:37:53.108535  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:53.108544  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:53.108672  165526 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-780990","namespace":"kube-system","uid":"cbd45760-c484-4cb2-836c-2f14805b67dd","resourceVersion":"284","creationTimestamp":"2023-11-27T11:36:23Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"16e69e88ab42c0e4f329585035cb732a","kubernetes.io/config.mirror":"16e69e88ab42c0e4f329585035cb732a","kubernetes.io/config.seen":"2023-11-27T11:36:22.976529906Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1127 11:37:53.109202  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:53.109220  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:53.109229  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:53.109236  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:53.110997  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:53.111017  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:53.111028  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:53.111041  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:53.111053  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:53 GMT
	I1127 11:37:53.111061  165526 round_trippers.go:580]     Audit-Id: 04cdfa40-5b14-439a-a1b9-282b5ed523a0
	I1127 11:37:53.111071  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:53.111084  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:53.111195  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"416","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 11:37:53.111573  165526 pod_ready.go:92] pod "kube-apiserver-multinode-780990" in "kube-system" namespace has status "Ready":"True"
	I1127 11:37:53.111591  165526 pod_ready.go:81] duration metric: took 5.134467ms waiting for pod "kube-apiserver-multinode-780990" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:53.111605  165526 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-780990" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:53.111676  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-780990
	I1127 11:37:53.111687  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:53.111698  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:53.111708  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:53.113447  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:53.113467  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:53.113476  165526 round_trippers.go:580]     Audit-Id: c9efc442-07c1-4538-b4c5-391df53adc77
	I1127 11:37:53.113484  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:53.113491  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:53.113502  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:53.113512  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:53.113523  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:53 GMT
	I1127 11:37:53.113662  165526 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-780990","namespace":"kube-system","uid":"f967b509-0a82-4a6d-badd-530f1c9d9761","resourceVersion":"281","creationTimestamp":"2023-11-27T11:36:21Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"14104bb059abd8436c9b45a2913e2f31","kubernetes.io/config.mirror":"14104bb059abd8436c9b45a2913e2f31","kubernetes.io/config.seen":"2023-11-27T11:36:16.715533663Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1127 11:37:53.114073  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:53.114087  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:53.114094  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:53.114101  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:53.115820  165526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 11:37:53.115843  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:53.115850  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:53 GMT
	I1127 11:37:53.115858  165526 round_trippers.go:580]     Audit-Id: be2839bb-5661-4ca5-8c82-f43045144c85
	I1127 11:37:53.115880  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:53.115893  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:53.115903  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:53.115909  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:53.116058  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"416","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 11:37:53.116479  165526 pod_ready.go:92] pod "kube-controller-manager-multinode-780990" in "kube-system" namespace has status "Ready":"True"
	I1127 11:37:53.116503  165526 pod_ready.go:81] duration metric: took 4.889636ms waiting for pod "kube-controller-manager-multinode-780990" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:53.116518  165526 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6lbv6" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:53.287936  165526 request.go:629] Waited for 171.335508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6lbv6
	I1127 11:37:53.288001  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6lbv6
	I1127 11:37:53.288005  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:53.288014  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:53.288021  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:53.290357  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:53.290397  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:53.290406  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:53.290415  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:53.290421  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:53 GMT
	I1127 11:37:53.290430  165526 round_trippers.go:580]     Audit-Id: 2668173b-4269-42d3-8602-ec3f7dfdb7c8
	I1127 11:37:53.290438  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:53.290447  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:53.290634  165526 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6lbv6","generateName":"kube-proxy-","namespace":"kube-system","uid":"3796fc28-e907-4af3-91f9-7aa0cb2bff44","resourceVersion":"370","creationTimestamp":"2023-11-27T11:36:35Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7161d318-270a-4bd9-be73-21d7f5329814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7161d318-270a-4bd9-be73-21d7f5329814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1127 11:37:53.487484  165526 request.go:629] Waited for 196.386711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:53.487570  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:53.487583  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:53.487597  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:53.487625  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:53.489974  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:53.489993  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:53.490002  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:53.490008  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:53.490015  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:53.490024  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:53.490034  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:53 GMT
	I1127 11:37:53.490043  165526 round_trippers.go:580]     Audit-Id: 215f6630-7b2e-4a76-b20a-81704100d781
	I1127 11:37:53.490172  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"416","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 11:37:53.490504  165526 pod_ready.go:92] pod "kube-proxy-6lbv6" in "kube-system" namespace has status "Ready":"True"
	I1127 11:37:53.490524  165526 pod_ready.go:81] duration metric: took 373.994607ms waiting for pod "kube-proxy-6lbv6" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:53.490538  165526 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ddlgz" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:53.687949  165526 request.go:629] Waited for 197.33814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ddlgz
	I1127 11:37:53.688010  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ddlgz
	I1127 11:37:53.688015  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:53.688027  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:53.688039  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:53.690475  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:53.690494  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:53.690500  165526 round_trippers.go:580]     Audit-Id: fc07871d-ac1b-49ba-a591-2c4be3e1f6f5
	I1127 11:37:53.690506  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:53.690511  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:53.690517  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:53.690525  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:53.690533  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:53 GMT
	I1127 11:37:53.690698  165526 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ddlgz","generateName":"kube-proxy-","namespace":"kube-system","uid":"27c2d203-c753-4fc6-a87a-116df9e6e665","resourceVersion":"462","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7161d318-270a-4bd9-be73-21d7f5329814","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7161d318-270a-4bd9-be73-21d7f5329814\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1127 11:37:53.887487  165526 request.go:629] Waited for 196.357188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:53.887551  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990-m02
	I1127 11:37:53.887556  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:53.887563  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:53.887576  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:53.889887  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:53.889906  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:53.889915  165526 round_trippers.go:580]     Audit-Id: ee2aa8fd-753c-45be-bce0-973d8b365f4f
	I1127 11:37:53.889923  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:53.889930  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:53.889937  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:53.889947  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:53.889956  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:53 GMT
	I1127 11:37:53.890083  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990-m02","uid":"506b27f4-7cc2-417c-b775-e3e9796145cb","resourceVersion":"498","creationTimestamp":"2023-11-27T11:37:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:37:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I1127 11:37:53.890411  165526 pod_ready.go:92] pod "kube-proxy-ddlgz" in "kube-system" namespace has status "Ready":"True"
	I1127 11:37:53.890427  165526 pod_ready.go:81] duration metric: took 399.874896ms waiting for pod "kube-proxy-ddlgz" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:53.890437  165526 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-780990" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:54.087900  165526 request.go:629] Waited for 197.384413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-780990
	I1127 11:37:54.087976  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-780990
	I1127 11:37:54.087981  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:54.087991  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:54.087997  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:54.090352  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:54.090380  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:54.090390  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:54.090398  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:54.090406  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:54.090414  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:54.090422  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:54 GMT
	I1127 11:37:54.090431  165526 round_trippers.go:580]     Audit-Id: 6c8db8cf-d7ff-4716-879e-0981856d1e13
	I1127 11:37:54.090539  165526 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-780990","namespace":"kube-system","uid":"a7b93896-e1d5-432e-8823-0015d815cd78","resourceVersion":"306","creationTimestamp":"2023-11-27T11:36:23Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d167e69bbb0a06d8435e369b8f69acdb","kubernetes.io/config.mirror":"d167e69bbb0a06d8435e369b8f69acdb","kubernetes.io/config.seen":"2023-11-27T11:36:22.976521732Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1127 11:37:54.287347  165526 request.go:629] Waited for 196.348476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:54.287428  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-780990
	I1127 11:37:54.287437  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:54.287450  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:54.287468  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:54.289736  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:54.289762  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:54.289772  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:54.289778  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:54.289784  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:54 GMT
	I1127 11:37:54.289789  165526 round_trippers.go:580]     Audit-Id: 377ed82a-6218-4bba-9f5c-67604d08cf03
	I1127 11:37:54.289797  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:54.289805  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:54.289973  165526 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"416","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T11:36:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 11:37:54.290303  165526 pod_ready.go:92] pod "kube-scheduler-multinode-780990" in "kube-system" namespace has status "Ready":"True"
	I1127 11:37:54.290326  165526 pod_ready.go:81] duration metric: took 399.880566ms waiting for pod "kube-scheduler-multinode-780990" in "kube-system" namespace to be "Ready" ...
	I1127 11:37:54.290338  165526 pod_ready.go:38] duration metric: took 1.200278694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 11:37:54.290353  165526 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 11:37:54.290397  165526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:37:54.301483  165526 system_svc.go:56] duration metric: took 11.119288ms WaitForService to wait for kubelet.
	I1127 11:37:54.301516  165526 kubeadm.go:581] duration metric: took 32.237071374s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 11:37:54.301545  165526 node_conditions.go:102] verifying NodePressure condition ...
	I1127 11:37:54.487976  165526 request.go:629] Waited for 186.346982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1127 11:37:54.488054  165526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1127 11:37:54.488066  165526 round_trippers.go:469] Request Headers:
	I1127 11:37:54.488077  165526 round_trippers.go:473]     Accept: application/json, */*
	I1127 11:37:54.488088  165526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 11:37:54.490328  165526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 11:37:54.490347  165526 round_trippers.go:577] Response Headers:
	I1127 11:37:54.490354  165526 round_trippers.go:580]     Audit-Id: feb4b040-9a54-4379-9aa7-d3122f8a20be
	I1127 11:37:54.490359  165526 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 11:37:54.490364  165526 round_trippers.go:580]     Content-Type: application/json
	I1127 11:37:54.490370  165526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44c196b1-7d2e-41de-a223-36d576400628
	I1127 11:37:54.490375  165526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4633d000-473b-459f-9534-84e1b29eb43a
	I1127 11:37:54.490381  165526 round_trippers.go:580]     Date: Mon, 27 Nov 2023 11:37:54 GMT
	I1127 11:37:54.490565  165526 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"499"},"items":[{"metadata":{"name":"multinode-780990","uid":"dfbbfcc4-6405-4ecb-bf8f-323d33cb7828","resourceVersion":"416","creationTimestamp":"2023-11-27T11:36:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-780990","kubernetes.io/os":"linux","minikube.k8s.io/commit":"81390b5609e7feb2151fde4633273d04eb05a21f","minikube.k8s.io/name":"multinode-780990","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T11_36_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12288 chars]
	I1127 11:37:54.491250  165526 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1127 11:37:54.491274  165526 node_conditions.go:123] node cpu capacity is 8
	I1127 11:37:54.491286  165526 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1127 11:37:54.491289  165526 node_conditions.go:123] node cpu capacity is 8
	I1127 11:37:54.491293  165526 node_conditions.go:105] duration metric: took 189.739458ms to run NodePressure ...
	I1127 11:37:54.491306  165526 start.go:228] waiting for startup goroutines ...
	I1127 11:37:54.491340  165526 start.go:242] writing updated cluster config ...
	I1127 11:37:54.491651  165526 ssh_runner.go:195] Run: rm -f paused
	I1127 11:37:54.540206  165526 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1127 11:37:54.543181  165526 out.go:177] * Done! kubectl is now configured to use "multinode-780990" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 27 11:37:07 multinode-780990 crio[948]: time="2023-11-27 11:37:07.670753879Z" level=info msg="Starting container: 0395e25d0e73ed19aaa78b9fa806f147576b64eb8be2a70a6a8c8318e4666441" id=dbe0cd48-a4a8-4952-82ec-305fb8f3101b name=/runtime.v1.RuntimeService/StartContainer
	Nov 27 11:37:07 multinode-780990 crio[948]: time="2023-11-27 11:37:07.670766508Z" level=info msg="Created container 59ddd0be619e2815bdcefc789185e2ed16f50a47a58a62dd1be39177aa3d0c47: kube-system/coredns-5dd5756b68-4jsq5/coredns" id=8d0c9301-6f35-4c8b-8905-4f51c079bbc1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 27 11:37:07 multinode-780990 crio[948]: time="2023-11-27 11:37:07.671181578Z" level=info msg="Starting container: 59ddd0be619e2815bdcefc789185e2ed16f50a47a58a62dd1be39177aa3d0c47" id=a3994c6d-6634-4b54-b72a-33e6b3628d1e name=/runtime.v1.RuntimeService/StartContainer
	Nov 27 11:37:07 multinode-780990 crio[948]: time="2023-11-27 11:37:07.680751501Z" level=info msg="Started container" PID=2354 containerID=0395e25d0e73ed19aaa78b9fa806f147576b64eb8be2a70a6a8c8318e4666441 description=kube-system/storage-provisioner/storage-provisioner id=dbe0cd48-a4a8-4952-82ec-305fb8f3101b name=/runtime.v1.RuntimeService/StartContainer sandboxID=69f8d6719098ba4d278f0efdd4fdaad79d6f7fe167fac693ed2e7f8e6830f582
	Nov 27 11:37:07 multinode-780990 crio[948]: time="2023-11-27 11:37:07.682147501Z" level=info msg="Started container" PID=2355 containerID=59ddd0be619e2815bdcefc789185e2ed16f50a47a58a62dd1be39177aa3d0c47 description=kube-system/coredns-5dd5756b68-4jsq5/coredns id=a3994c6d-6634-4b54-b72a-33e6b3628d1e name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3d9b120b17caea7eaca4f017f33b307693fae311f3da106a49777f5e5bb67bc
	Nov 27 11:37:55 multinode-780990 crio[948]: time="2023-11-27 11:37:55.551157595Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-wslrr/POD" id=4dfb48e4-3ffe-4706-bf9b-3d37003dcc47 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 27 11:37:55 multinode-780990 crio[948]: time="2023-11-27 11:37:55.551227516Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 27 11:37:55 multinode-780990 crio[948]: time="2023-11-27 11:37:55.564160297Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-wslrr Namespace:default ID:336a5827ffdb2db6530a94bd54af73dca6c0f9baea5aac7a0dd42554d0abaaa3 UID:81becd92-ba00-4593-8b4c-b3fb4f83d67f NetNS:/var/run/netns/fc317f4a-fa3f-4121-bce0-1ed8f05df903 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 27 11:37:55 multinode-780990 crio[948]: time="2023-11-27 11:37:55.564194152Z" level=info msg="Adding pod default_busybox-5bc68d56bd-wslrr to CNI network \"kindnet\" (type=ptp)"
	Nov 27 11:37:55 multinode-780990 crio[948]: time="2023-11-27 11:37:55.576857488Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-wslrr Namespace:default ID:336a5827ffdb2db6530a94bd54af73dca6c0f9baea5aac7a0dd42554d0abaaa3 UID:81becd92-ba00-4593-8b4c-b3fb4f83d67f NetNS:/var/run/netns/fc317f4a-fa3f-4121-bce0-1ed8f05df903 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 27 11:37:55 multinode-780990 crio[948]: time="2023-11-27 11:37:55.576979780Z" level=info msg="Checking pod default_busybox-5bc68d56bd-wslrr for CNI network kindnet (type=ptp)"
	Nov 27 11:37:55 multinode-780990 crio[948]: time="2023-11-27 11:37:55.598824288Z" level=info msg="Ran pod sandbox 336a5827ffdb2db6530a94bd54af73dca6c0f9baea5aac7a0dd42554d0abaaa3 with infra container: default/busybox-5bc68d56bd-wslrr/POD" id=4dfb48e4-3ffe-4706-bf9b-3d37003dcc47 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 27 11:37:55 multinode-780990 crio[948]: time="2023-11-27 11:37:55.599942646Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=5869b44f-7ede-4382-9fce-4bb4364fc4da name=/runtime.v1.ImageService/ImageStatus
	Nov 27 11:37:55 multinode-780990 crio[948]: time="2023-11-27 11:37:55.600192237Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=5869b44f-7ede-4382-9fce-4bb4364fc4da name=/runtime.v1.ImageService/ImageStatus
	Nov 27 11:37:55 multinode-780990 crio[948]: time="2023-11-27 11:37:55.600931437Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=d1ff02d5-e2c8-46a8-9ba9-20ef380f74b3 name=/runtime.v1.ImageService/PullImage
	Nov 27 11:37:55 multinode-780990 crio[948]: time="2023-11-27 11:37:55.604572832Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 27 11:37:56 multinode-780990 crio[948]: time="2023-11-27 11:37:56.089138923Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 27 11:37:56 multinode-780990 crio[948]: time="2023-11-27 11:37:56.919063520Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=d1ff02d5-e2c8-46a8-9ba9-20ef380f74b3 name=/runtime.v1.ImageService/PullImage
	Nov 27 11:37:56 multinode-780990 crio[948]: time="2023-11-27 11:37:56.920131060Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=d5ad5b3d-6f23-4112-96aa-692b6e3633e2 name=/runtime.v1.ImageService/ImageStatus
	Nov 27 11:37:56 multinode-780990 crio[948]: time="2023-11-27 11:37:56.921287758Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d5ad5b3d-6f23-4112-96aa-692b6e3633e2 name=/runtime.v1.ImageService/ImageStatus
	Nov 27 11:37:56 multinode-780990 crio[948]: time="2023-11-27 11:37:56.922097312Z" level=info msg="Creating container: default/busybox-5bc68d56bd-wslrr/busybox" id=2be2b1ea-2d19-4f75-bec1-bde585ad7742 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 27 11:37:56 multinode-780990 crio[948]: time="2023-11-27 11:37:56.922175088Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 27 11:37:56 multinode-780990 crio[948]: time="2023-11-27 11:37:56.989166480Z" level=info msg="Created container 5ab0091bdfa9758b8973df34d6befb4a8adb697e8482e1372095652ebab57b2c: default/busybox-5bc68d56bd-wslrr/busybox" id=2be2b1ea-2d19-4f75-bec1-bde585ad7742 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 27 11:37:56 multinode-780990 crio[948]: time="2023-11-27 11:37:56.989907887Z" level=info msg="Starting container: 5ab0091bdfa9758b8973df34d6befb4a8adb697e8482e1372095652ebab57b2c" id=38e79024-3a9d-4da1-ab66-0a0f3bcd33e5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 27 11:37:57 multinode-780990 crio[948]: time="2023-11-27 11:37:57.000234130Z" level=info msg="Started container" PID=2534 containerID=5ab0091bdfa9758b8973df34d6befb4a8adb697e8482e1372095652ebab57b2c description=default/busybox-5bc68d56bd-wslrr/busybox id=38e79024-3a9d-4da1-ab66-0a0f3bcd33e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=336a5827ffdb2db6530a94bd54af73dca6c0f9baea5aac7a0dd42554d0abaaa3
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5ab0091bdfa97       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   336a5827ffdb2       busybox-5bc68d56bd-wslrr
	59ddd0be619e2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      54 seconds ago       Running             coredns                   0                   a3d9b120b17ca       coredns-5dd5756b68-4jsq5
	0395e25d0e73e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      54 seconds ago       Running             storage-provisioner       0                   69f8d6719098b       storage-provisioner
	3638d9d5a5bd6       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                0                   2ede2ffeb5541       kube-proxy-6lbv6
	5ac2be425958b       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      About a minute ago   Running             kindnet-cni               0                   1f88705abb700       kindnet-vlzt4
	ba8df8b2b9a5d       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   52fdd311ca7ee       kube-controller-manager-multinode-780990
	231eee921900a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   6219378da7a36       etcd-multinode-780990
	fdcc38c739571       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   4917e60c80477       kube-apiserver-multinode-780990
	5b60ede97e829       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   0fa771a9bd652       kube-scheduler-multinode-780990
	
	* 
	* ==> coredns [59ddd0be619e2815bdcefc789185e2ed16f50a47a58a62dd1be39177aa3d0c47] <==
	* [INFO] 10.244.1.2:50985 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000613767s
	[INFO] 10.244.0.3:49254 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110183s
	[INFO] 10.244.0.3:55344 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002156755s
	[INFO] 10.244.0.3:36760 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104631s
	[INFO] 10.244.0.3:34885 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070776s
	[INFO] 10.244.0.3:51434 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0012985s
	[INFO] 10.244.0.3:40001 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00004689s
	[INFO] 10.244.0.3:50238 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075676s
	[INFO] 10.244.0.3:48126 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045026s
	[INFO] 10.244.1.2:40435 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128542s
	[INFO] 10.244.1.2:53287 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010359s
	[INFO] 10.244.1.2:51004 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071936s
	[INFO] 10.244.1.2:53123 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057601s
	[INFO] 10.244.0.3:55786 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120582s
	[INFO] 10.244.0.3:44711 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073963s
	[INFO] 10.244.0.3:33558 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075057s
	[INFO] 10.244.0.3:40477 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066938s
	[INFO] 10.244.1.2:52931 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129056s
	[INFO] 10.244.1.2:56827 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000160332s
	[INFO] 10.244.1.2:36615 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084658s
	[INFO] 10.244.1.2:40103 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129538s
	[INFO] 10.244.0.3:41154 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124646s
	[INFO] 10.244.0.3:51240 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000086014s
	[INFO] 10.244.0.3:58229 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000050159s
	[INFO] 10.244.0.3:55520 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000052721s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-780990
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-780990
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=81390b5609e7feb2151fde4633273d04eb05a21f
	                    minikube.k8s.io/name=multinode-780990
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T11_36_23_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 11:36:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-780990
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 11:37:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 11:37:07 +0000   Mon, 27 Nov 2023 11:36:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 11:37:07 +0000   Mon, 27 Nov 2023 11:36:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 11:37:07 +0000   Mon, 27 Nov 2023 11:36:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 11:37:07 +0000   Mon, 27 Nov 2023 11:37:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-780990
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 2f835d9077dc4795b01bf98bce0623e8
	  System UUID:                ce02ff8e-a70d-4572-bccb-ec13b448dd81
	  Boot ID:                    70e275d9-e289-4a40-9f12-718983944527
	  Kernel Version:             5.15.0-1046-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-wslrr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5dd5756b68-4jsq5                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     86s
	  kube-system                 etcd-multinode-780990                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         98s
	  kube-system                 kindnet-vlzt4                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      86s
	  kube-system                 kube-apiserver-multinode-780990             250m (3%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-multinode-780990    200m (2%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-6lbv6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-multinode-780990             100m (1%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 84s   kube-proxy       
	  Normal  Starting                 99s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  98s   kubelet          Node multinode-780990 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s   kubelet          Node multinode-780990 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s   kubelet          Node multinode-780990 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           86s   node-controller  Node multinode-780990 event: Registered Node multinode-780990 in Controller
	  Normal  NodeReady                54s   kubelet          Node multinode-780990 status is now: NodeReady
	
	
	Name:               multinode-780990-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-780990-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 11:37:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-780990-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 11:37:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 11:37:52 +0000   Mon, 27 Nov 2023 11:37:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 11:37:52 +0000   Mon, 27 Nov 2023 11:37:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 11:37:52 +0000   Mon, 27 Nov 2023 11:37:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 11:37:52 +0000   Mon, 27 Nov 2023 11:37:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-780990-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 52990897e80e4b26b3fd337d09bf308f
	  System UUID:                45d8a360-fc6f-40b2-b419-3708afcd38cf
	  Boot ID:                    70e275d9-e289-4a40-9f12-718983944527
	  Kernel Version:             5.15.0-1046-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-fxkgq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-6pbx8               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      40s
	  kube-system                 kube-proxy-ddlgz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 39s                kube-proxy       
	  Normal  NodeHasSufficientMemory  40s (x5 over 42s)  kubelet          Node multinode-780990-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x5 over 42s)  kubelet          Node multinode-780990-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x5 over 42s)  kubelet          Node multinode-780990-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                node-controller  Node multinode-780990-m02 event: Registered Node multinode-780990-m02 in Controller
	  Normal  NodeReady                9s                 kubelet          Node multinode-780990-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004916] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006603] FS-Cache: N-cookie d=00000000bad6431e{9p.inode} n=00000000519b9590
	[  +0.008720] FS-Cache: N-key=[8] '4aa20f0200000000'
	[  +0.301934] FS-Cache: Duplicate cookie detected
	[  +0.004681] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006745] FS-Cache: O-cookie d=00000000bad6431e{9p.inode} n=0000000001f430cd
	[  +0.007366] FS-Cache: O-key=[8] '52a20f0200000000'
	[  +0.004934] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006587] FS-Cache: N-cookie d=00000000bad6431e{9p.inode} n=00000000245eaa82
	[  +0.007353] FS-Cache: N-key=[8] '52a20f0200000000'
	[ +22.917859] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov27 11:28] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: aa de 03 b2 cf 96 12 12 f1 37 75 e0 08 00
	[  +1.035585] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa de 03 b2 cf 96 12 12 f1 37 75 e0 08 00
	[  +2.011761] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa de 03 b2 cf 96 12 12 f1 37 75 e0 08 00
	[  +4.255612] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa de 03 b2 cf 96 12 12 f1 37 75 e0 08 00
	[  +8.191137] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: aa de 03 b2 cf 96 12 12 f1 37 75 e0 08 00
	[Nov27 11:29] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: aa de 03 b2 cf 96 12 12 f1 37 75 e0 08 00
	[ +32.252667] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: aa de 03 b2 cf 96 12 12 f1 37 75 e0 08 00
	
	* 
	* ==> etcd [231eee921900ac26bda372962e0ac532c7214eea43cde6069213d611f677937e] <==
	* {"level":"info","ts":"2023-11-27T11:36:17.542346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-11-27T11:36:17.542563Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-11-27T11:36:17.543208Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-27T11:36:17.543309Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-27T11:36:17.543364Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-27T11:36:17.543525Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-27T11:36:17.543599Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-27T11:36:18.369351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-27T11:36:18.369394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-27T11:36:18.369408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-11-27T11:36:18.369419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-11-27T11:36:18.369425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-27T11:36:18.369433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-11-27T11:36:18.369448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-27T11:36:18.370341Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T11:36:18.371245Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-27T11:36:18.371241Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-780990 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-27T11:36:18.371279Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-27T11:36:18.371648Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T11:36:18.371689Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-27T11:36:18.371841Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-27T11:36:18.371792Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T11:36:18.371937Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T11:36:18.372541Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-11-27T11:36:18.372674Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  11:38:01 up  2:20,  0 users,  load average: 0.57, 0.98, 1.40
	Linux multinode-780990 5.15.0-1046-gcp #54~20.04.1-Ubuntu SMP Wed Oct 25 08:22:15 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [5ac2be425958b9a8a365c033a9f5dbb51f29f660a821d43fdc4d2e6941d2f1c7] <==
	* I1127 11:36:36.748278       1 main.go:116] setting mtu 1500 for CNI 
	I1127 11:36:36.748305       1 main.go:146] kindnetd IP family: "ipv4"
	I1127 11:36:36.748327       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1127 11:37:07.071916       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1127 11:37:07.144638       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 11:37:07.144678       1 main.go:227] handling current node
	I1127 11:37:17.159928       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 11:37:17.159960       1 main.go:227] handling current node
	I1127 11:37:27.172622       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 11:37:27.172650       1 main.go:227] handling current node
	I1127 11:37:27.172660       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1127 11:37:27.172665       1 main.go:250] Node multinode-780990-m02 has CIDR [10.244.1.0/24] 
	I1127 11:37:27.172807       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1127 11:37:37.176642       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 11:37:37.176668       1 main.go:227] handling current node
	I1127 11:37:37.176677       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1127 11:37:37.176681       1 main.go:250] Node multinode-780990-m02 has CIDR [10.244.1.0/24] 
	I1127 11:37:47.185865       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 11:37:47.185890       1 main.go:227] handling current node
	I1127 11:37:47.185902       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1127 11:37:47.185906       1 main.go:250] Node multinode-780990-m02 has CIDR [10.244.1.0/24] 
	I1127 11:37:57.198782       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 11:37:57.198809       1 main.go:227] handling current node
	I1127 11:37:57.198818       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1127 11:37:57.198823       1 main.go:250] Node multinode-780990-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [fdcc38c739571b7ae643c95e1e652c13422255a85317650d806ff18a023a80db] <==
	* I1127 11:36:19.940449       1 cache.go:39] Caches are synced for autoregister controller
	I1127 11:36:19.939841       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1127 11:36:19.940978       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1127 11:36:19.941032       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1127 11:36:19.941114       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1127 11:36:19.946142       1 controller.go:624] quota admission added evaluator for: namespaces
	I1127 11:36:19.950408       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1127 11:36:19.954570       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1127 11:36:20.055785       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I1127 11:36:20.157855       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1127 11:36:20.810365       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1127 11:36:20.813807       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1127 11:36:20.813822       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1127 11:36:21.204978       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1127 11:36:21.236958       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1127 11:36:21.348665       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1127 11:36:21.354175       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1127 11:36:21.355117       1 controller.go:624] quota admission added evaluator for: endpoints
	I1127 11:36:21.359072       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1127 11:36:21.957975       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1127 11:36:22.920449       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1127 11:36:22.930495       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1127 11:36:22.938775       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1127 11:36:35.595011       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1127 11:36:35.643275       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [ba8df8b2b9a5d031609c82961b809f9164ac20843f2f88dd405165513a9478a1] <==
	* I1127 11:37:08.156985       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.172µs"
	I1127 11:37:08.248503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.175574ms"
	I1127 11:37:08.248598       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.027µs"
	I1127 11:37:10.556626       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1127 11:37:10.556680       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I1127 11:37:10.556692       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-4jsq5" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-4jsq5"
	I1127 11:37:21.405060       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-780990-m02\" does not exist"
	I1127 11:37:21.410599       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-780990-m02" podCIDRs=["10.244.1.0/24"]
	I1127 11:37:21.414063       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ddlgz"
	I1127 11:37:21.416008       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6pbx8"
	I1127 11:37:25.557705       1 event.go:307] "Event occurred" object="multinode-780990-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-780990-m02 event: Registered Node multinode-780990-m02 in Controller"
	I1127 11:37:25.557726       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-780990-m02"
	I1127 11:37:52.811783       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-780990-m02"
	I1127 11:37:55.231084       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1127 11:37:55.237883       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-fxkgq"
	I1127 11:37:55.242422       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-wslrr"
	I1127 11:37:55.249196       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="18.329681ms"
	I1127 11:37:55.257319       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.067608ms"
	I1127 11:37:55.257410       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="56.697µs"
	I1127 11:37:55.267381       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="75.737µs"
	I1127 11:37:55.571945       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-fxkgq" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-fxkgq"
	I1127 11:37:57.254178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.146468ms"
	I1127 11:37:57.254271       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.15µs"
	I1127 11:37:58.017734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.484779ms"
	I1127 11:37:58.017844       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="68.891µs"
	
	* 
	* ==> kube-proxy [3638d9d5a5bd61f1d5ef0097a7fa35ad3ce89970213890f7a27685fad71557e1] <==
	* I1127 11:36:36.848256       1 server_others.go:69] "Using iptables proxy"
	I1127 11:36:36.858111       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1127 11:36:36.957406       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1127 11:36:36.959798       1 server_others.go:152] "Using iptables Proxier"
	I1127 11:36:36.959858       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1127 11:36:36.959870       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1127 11:36:36.959908       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1127 11:36:36.960218       1 server.go:846] "Version info" version="v1.28.4"
	I1127 11:36:36.960237       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1127 11:36:36.960913       1 config.go:188] "Starting service config controller"
	I1127 11:36:36.960974       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1127 11:36:36.960919       1 config.go:97] "Starting endpoint slice config controller"
	I1127 11:36:36.961005       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1127 11:36:36.960946       1 config.go:315] "Starting node config controller"
	I1127 11:36:36.961019       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1127 11:36:37.061701       1 shared_informer.go:318] Caches are synced for node config
	I1127 11:36:37.061701       1 shared_informer.go:318] Caches are synced for service config
	I1127 11:36:37.061741       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [5b60ede97e8293affd1eee65689b02ccf155c3358a58752d60be815dcac73598] <==
	* W1127 11:36:20.045466       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1127 11:36:20.045542       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1127 11:36:20.045658       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1127 11:36:20.045702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1127 11:36:20.045822       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1127 11:36:20.045864       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1127 11:36:20.046133       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1127 11:36:20.046193       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1127 11:36:20.046311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 11:36:20.046355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1127 11:36:20.046463       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 11:36:20.046597       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1127 11:36:20.051941       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 11:36:20.052548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1127 11:36:20.888926       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 11:36:20.888965       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1127 11:36:20.939459       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 11:36:20.939499       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1127 11:36:20.996422       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1127 11:36:20.996461       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1127 11:36:21.035952       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1127 11:36:21.035991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1127 11:36:21.058402       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1127 11:36:21.058441       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1127 11:36:21.360847       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 27 11:36:35 multinode-780990 kubelet[1586]: I1127 11:36:35.863411    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c758b029-c7c6-4cbb-be6a-d1f9a3a52e24-xtables-lock\") pod \"kindnet-vlzt4\" (UID: \"c758b029-c7c6-4cbb-be6a-d1f9a3a52e24\") " pod="kube-system/kindnet-vlzt4"
	Nov 27 11:36:35 multinode-780990 kubelet[1586]: I1127 11:36:35.863488    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3796fc28-e907-4af3-91f9-7aa0cb2bff44-lib-modules\") pod \"kube-proxy-6lbv6\" (UID: \"3796fc28-e907-4af3-91f9-7aa0cb2bff44\") " pod="kube-system/kube-proxy-6lbv6"
	Nov 27 11:36:35 multinode-780990 kubelet[1586]: I1127 11:36:35.863524    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdr8t\" (UniqueName: \"kubernetes.io/projected/c758b029-c7c6-4cbb-be6a-d1f9a3a52e24-kube-api-access-qdr8t\") pod \"kindnet-vlzt4\" (UID: \"c758b029-c7c6-4cbb-be6a-d1f9a3a52e24\") " pod="kube-system/kindnet-vlzt4"
	Nov 27 11:36:35 multinode-780990 kubelet[1586]: I1127 11:36:35.863638    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c758b029-c7c6-4cbb-be6a-d1f9a3a52e24-lib-modules\") pod \"kindnet-vlzt4\" (UID: \"c758b029-c7c6-4cbb-be6a-d1f9a3a52e24\") " pod="kube-system/kindnet-vlzt4"
	Nov 27 11:36:35 multinode-780990 kubelet[1586]: I1127 11:36:35.863729    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3796fc28-e907-4af3-91f9-7aa0cb2bff44-kube-proxy\") pod \"kube-proxy-6lbv6\" (UID: \"3796fc28-e907-4af3-91f9-7aa0cb2bff44\") " pod="kube-system/kube-proxy-6lbv6"
	Nov 27 11:36:35 multinode-780990 kubelet[1586]: I1127 11:36:35.863765    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p68hz\" (UniqueName: \"kubernetes.io/projected/3796fc28-e907-4af3-91f9-7aa0cb2bff44-kube-api-access-p68hz\") pod \"kube-proxy-6lbv6\" (UID: \"3796fc28-e907-4af3-91f9-7aa0cb2bff44\") " pod="kube-system/kube-proxy-6lbv6"
	Nov 27 11:36:36 multinode-780990 kubelet[1586]: W1127 11:36:36.340855    1586 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b91bdbce677fe6f82a9f829d9de3e87c315a78c68ff007e9e6f8a0c391b8497f/crio-2ede2ffeb55417aa08b69df9ef54d42f55b4505f7d17b2610c75de31aacc9e87 WatchSource:0}: Error finding container 2ede2ffeb55417aa08b69df9ef54d42f55b4505f7d17b2610c75de31aacc9e87: Status 404 returned error can't find the container with id 2ede2ffeb55417aa08b69df9ef54d42f55b4505f7d17b2610c75de31aacc9e87
	Nov 27 11:36:36 multinode-780990 kubelet[1586]: W1127 11:36:36.341186    1586 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b91bdbce677fe6f82a9f829d9de3e87c315a78c68ff007e9e6f8a0c391b8497f/crio-1f88705abb7005f259fbbe2a1490a4feaa55706f7d6ddd3144c5a5cc13a23f57 WatchSource:0}: Error finding container 1f88705abb7005f259fbbe2a1490a4feaa55706f7d6ddd3144c5a5cc13a23f57: Status 404 returned error can't find the container with id 1f88705abb7005f259fbbe2a1490a4feaa55706f7d6ddd3144c5a5cc13a23f57
	Nov 27 11:36:37 multinode-780990 kubelet[1586]: I1127 11:36:37.107776    1586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6lbv6" podStartSLOduration=2.10772402 podCreationTimestamp="2023-11-27 11:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-27 11:36:37.107607519 +0000 UTC m=+14.213106553" watchObservedRunningTime="2023-11-27 11:36:37.10772402 +0000 UTC m=+14.213223050"
	Nov 27 11:36:37 multinode-780990 kubelet[1586]: I1127 11:36:37.107904    1586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-vlzt4" podStartSLOduration=2.107877049 podCreationTimestamp="2023-11-27 11:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-27 11:36:37.097945587 +0000 UTC m=+14.203444617" watchObservedRunningTime="2023-11-27 11:36:37.107877049 +0000 UTC m=+14.213376080"
	Nov 27 11:37:07 multinode-780990 kubelet[1586]: I1127 11:37:07.236361    1586 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 27 11:37:07 multinode-780990 kubelet[1586]: I1127 11:37:07.257854    1586 topology_manager.go:215] "Topology Admit Handler" podUID="1855f20f-5a70-4e9a-b202-bdc0f046497c" podNamespace="kube-system" podName="storage-provisioner"
	Nov 27 11:37:07 multinode-780990 kubelet[1586]: I1127 11:37:07.259229    1586 topology_manager.go:215] "Topology Admit Handler" podUID="c4d42d52-2ac2-435b-a219-96b0b3934f2d" podNamespace="kube-system" podName="coredns-5dd5756b68-4jsq5"
	Nov 27 11:37:07 multinode-780990 kubelet[1586]: I1127 11:37:07.392573    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4d42d52-2ac2-435b-a219-96b0b3934f2d-config-volume\") pod \"coredns-5dd5756b68-4jsq5\" (UID: \"c4d42d52-2ac2-435b-a219-96b0b3934f2d\") " pod="kube-system/coredns-5dd5756b68-4jsq5"
	Nov 27 11:37:07 multinode-780990 kubelet[1586]: I1127 11:37:07.392644    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1855f20f-5a70-4e9a-b202-bdc0f046497c-tmp\") pod \"storage-provisioner\" (UID: \"1855f20f-5a70-4e9a-b202-bdc0f046497c\") " pod="kube-system/storage-provisioner"
	Nov 27 11:37:07 multinode-780990 kubelet[1586]: I1127 11:37:07.392737    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg98t\" (UniqueName: \"kubernetes.io/projected/1855f20f-5a70-4e9a-b202-bdc0f046497c-kube-api-access-hg98t\") pod \"storage-provisioner\" (UID: \"1855f20f-5a70-4e9a-b202-bdc0f046497c\") " pod="kube-system/storage-provisioner"
	Nov 27 11:37:07 multinode-780990 kubelet[1586]: I1127 11:37:07.392786    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r75wd\" (UniqueName: \"kubernetes.io/projected/c4d42d52-2ac2-435b-a219-96b0b3934f2d-kube-api-access-r75wd\") pod \"coredns-5dd5756b68-4jsq5\" (UID: \"c4d42d52-2ac2-435b-a219-96b0b3934f2d\") " pod="kube-system/coredns-5dd5756b68-4jsq5"
	Nov 27 11:37:07 multinode-780990 kubelet[1586]: W1127 11:37:07.604570    1586 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b91bdbce677fe6f82a9f829d9de3e87c315a78c68ff007e9e6f8a0c391b8497f/crio-69f8d6719098ba4d278f0efdd4fdaad79d6f7fe167fac693ed2e7f8e6830f582 WatchSource:0}: Error finding container 69f8d6719098ba4d278f0efdd4fdaad79d6f7fe167fac693ed2e7f8e6830f582: Status 404 returned error can't find the container with id 69f8d6719098ba4d278f0efdd4fdaad79d6f7fe167fac693ed2e7f8e6830f582
	Nov 27 11:37:07 multinode-780990 kubelet[1586]: W1127 11:37:07.604846    1586 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b91bdbce677fe6f82a9f829d9de3e87c315a78c68ff007e9e6f8a0c391b8497f/crio-a3d9b120b17caea7eaca4f017f33b307693fae311f3da106a49777f5e5bb67bc WatchSource:0}: Error finding container a3d9b120b17caea7eaca4f017f33b307693fae311f3da106a49777f5e5bb67bc: Status 404 returned error can't find the container with id a3d9b120b17caea7eaca4f017f33b307693fae311f3da106a49777f5e5bb67bc
	Nov 27 11:37:08 multinode-780990 kubelet[1586]: I1127 11:37:08.178275    1586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-4jsq5" podStartSLOduration=33.178217377 podCreationTimestamp="2023-11-27 11:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-27 11:37:08.156999483 +0000 UTC m=+45.262498513" watchObservedRunningTime="2023-11-27 11:37:08.178217377 +0000 UTC m=+45.283716409"
	Nov 27 11:37:08 multinode-780990 kubelet[1586]: I1127 11:37:08.178797    1586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.178756625 podCreationTimestamp="2023-11-27 11:36:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-27 11:37:08.177394782 +0000 UTC m=+45.282893810" watchObservedRunningTime="2023-11-27 11:37:08.178756625 +0000 UTC m=+45.284255683"
	Nov 27 11:37:55 multinode-780990 kubelet[1586]: I1127 11:37:55.249646    1586 topology_manager.go:215] "Topology Admit Handler" podUID="81becd92-ba00-4593-8b4c-b3fb4f83d67f" podNamespace="default" podName="busybox-5bc68d56bd-wslrr"
	Nov 27 11:37:55 multinode-780990 kubelet[1586]: I1127 11:37:55.399609    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8pxz\" (UniqueName: \"kubernetes.io/projected/81becd92-ba00-4593-8b4c-b3fb4f83d67f-kube-api-access-v8pxz\") pod \"busybox-5bc68d56bd-wslrr\" (UID: \"81becd92-ba00-4593-8b4c-b3fb4f83d67f\") " pod="default/busybox-5bc68d56bd-wslrr"
	Nov 27 11:37:55 multinode-780990 kubelet[1586]: W1127 11:37:55.596757    1586 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b91bdbce677fe6f82a9f829d9de3e87c315a78c68ff007e9e6f8a0c391b8497f/crio-336a5827ffdb2db6530a94bd54af73dca6c0f9baea5aac7a0dd42554d0abaaa3 WatchSource:0}: Error finding container 336a5827ffdb2db6530a94bd54af73dca6c0f9baea5aac7a0dd42554d0abaaa3: Status 404 returned error can't find the container with id 336a5827ffdb2db6530a94bd54af73dca6c0f9baea5aac7a0dd42554d0abaaa3
	Nov 27 11:37:57 multinode-780990 kubelet[1586]: I1127 11:37:57.250096    1586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-wslrr" podStartSLOduration=0.930849209 podCreationTimestamp="2023-11-27 11:37:55 +0000 UTC" firstStartedPulling="2023-11-27 11:37:55.600374008 +0000 UTC m=+92.705873029" lastFinishedPulling="2023-11-27 11:37:56.919581388 +0000 UTC m=+94.025080411" observedRunningTime="2023-11-27 11:37:57.249769307 +0000 UTC m=+94.355268337" watchObservedRunningTime="2023-11-27 11:37:57.250056591 +0000 UTC m=+94.355555620"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-780990 -n multinode-780990
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-780990 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.53s)
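
The paired W.../E... reflector lines in the kube-scheduler log above are a routine startup race, not the cause of this failure: each informer tries to list a cluster-scoped resource before the apiserver has synced its RBAC caches, receives Forbidden, and retries until the closing "Caches are synced" line. A minimal client-go sketch of that retry-on-Forbidden pattern, for illustration only (this is not scheduler or minikube code, and it assumes a reachable kubeconfig at the default path):

	package main

	import (
		"context"
		"fmt"
		"time"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config; the real scheduler uses in-cluster credentials instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		for {
			// The same cluster-scoped List the reflector issues for *v1.StorageClass.
			_, err := client.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
			if apierrors.IsForbidden(err) {
				// Forbidden during startup is transient: log and retry, as reflector.go does.
				fmt.Println("forbidden, RBAC caches not synced yet; retrying:", err)
				time.Sleep(time.Second)
				continue
			}
			if err != nil {
				panic(err)
			}
			fmt.Println("list succeeded; RBAC caches are synced")
			return
		}
	}

Against a healthy cluster the first List normally succeeds immediately; the retry loop only matters in the narrow startup window these log lines capture.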

                                                
                                    
x
+
TestRunningBinaryUpgrade (66.04s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.256389867.exe start -p running-upgrade-256859 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.256389867.exe start -p running-upgrade-256859 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m0.416926988s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-256859 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-256859 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.176855717s)

                                                
                                                
-- stdout --
	* [running-upgrade-256859] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-256859 in cluster running-upgrade-256859
	* Pulling base image ...
	* Updating the running docker "running-upgrade-256859" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1127 11:49:47.029610  243697 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:49:47.029782  243697 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:49:47.029794  243697 out.go:309] Setting ErrFile to fd 2...
	I1127 11:49:47.029802  243697 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:49:47.030050  243697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
	I1127 11:49:47.030634  243697 out.go:303] Setting JSON to false
	I1127 11:49:47.032151  243697 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9140,"bootTime":1701076647,"procs":570,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:49:47.032219  243697 start.go:138] virtualization: kvm guest
	I1127 11:49:47.035119  243697 out.go:177] * [running-upgrade-256859] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:49:47.037495  243697 notify.go:220] Checking for updates...
	I1127 11:49:47.037512  243697 out.go:177]   - MINIKUBE_LOCATION=17644
	I1127 11:49:47.039151  243697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:49:47.040744  243697 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:49:47.042409  243697 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	I1127 11:49:47.043896  243697 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 11:49:47.045756  243697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 11:49:47.047585  243697 config.go:182] Loaded profile config "running-upgrade-256859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1127 11:49:47.047607  243697 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 11:49:47.049760  243697 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1127 11:49:47.051051  243697 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:49:47.075918  243697 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 11:49:47.075999  243697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:49:47.139917  243697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:80 SystemTime:2023-11-27 11:49:47.128434318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:49:47.140036  243697 docker.go:295] overlay module found
	I1127 11:49:47.142333  243697 out.go:177] * Using the docker driver based on existing profile
	I1127 11:49:47.144129  243697 start.go:298] selected driver: docker
	I1127 11:49:47.144148  243697 start.go:902] validating driver "docker" against &{Name:running-upgrade-256859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-256859 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1127 11:49:47.144254  243697 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 11:49:47.145205  243697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:49:47.225651  243697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:80 SystemTime:2023-11-27 11:49:47.214780054 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:49:47.225986  243697 cni.go:84] Creating CNI manager for ""
	I1127 11:49:47.226010  243697 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1127 11:49:47.226024  243697 start_flags.go:323] config:
	{Name:running-upgrade-256859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-256859 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1127 11:49:47.229690  243697 out.go:177] * Starting control plane node running-upgrade-256859 in cluster running-upgrade-256859
	I1127 11:49:47.231220  243697 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 11:49:47.232809  243697 out.go:177] * Pulling base image ...
	I1127 11:49:47.234626  243697 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1127 11:49:47.234718  243697 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 11:49:47.254244  243697 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1127 11:49:47.254269  243697 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	W1127 11:49:47.266807  243697 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1127 11:49:47.266949  243697 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/running-upgrade-256859/config.json ...
	I1127 11:49:47.267009  243697 cache.go:107] acquiring lock: {Name:mk4f97eb860d98aad113203d258b73344e53c511 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:49:47.267084  243697 cache.go:107] acquiring lock: {Name:mka5421340475568c2b43b2f8798dad11817e030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:49:47.267043  243697 cache.go:107] acquiring lock: {Name:mk5adfc9fa3104d0497e36136a54e204efb78f1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:49:47.267118  243697 cache.go:107] acquiring lock: {Name:mk429e5724ab879c0ed969b44f07de85293ee406 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:49:47.267152  243697 cache.go:115] /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1127 11:49:47.267149  243697 cache.go:107] acquiring lock: {Name:mke76eaaffea8a3cc5e0440afed1826e347a09a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:49:47.267166  243697 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 128.465µs
	I1127 11:49:47.267173  243697 cache.go:194] Successfully downloaded all kic artifacts
	I1127 11:49:47.267185  243697 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1127 11:49:47.267192  243697 cache.go:115] /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1127 11:49:47.267009  243697 cache.go:107] acquiring lock: {Name:mk7c18fe3e9f96884546d56f057443deffa42614 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:49:47.267200  243697 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 50.626µs
	I1127 11:49:47.267201  243697 start.go:365] acquiring machines lock for running-upgrade-256859: {Name:mke9a07d8e3313fb6c709140f2155523cf6d7bf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:49:47.267137  243697 cache.go:115] /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1127 11:49:47.267085  243697 cache.go:107] acquiring lock: {Name:mkc145ca0afebbb70bfeac6f0773a24dc6f98769 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:49:47.267206  243697 cache.go:107] acquiring lock: {Name:mk3a30466c9cbee83b30c62f064af651959c9ff2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:49:47.267207  243697 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1127 11:49:47.267220  243697 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 223.272µs
	I1127 11:49:47.267253  243697 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1127 11:49:47.267259  243697 cache.go:115] /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1127 11:49:47.267274  243697 cache.go:115] /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1127 11:49:47.267275  243697 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 70.39µs
	I1127 11:49:47.267286  243697 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1127 11:49:47.267284  243697 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 295.419µs
	I1127 11:49:47.267297  243697 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1127 11:49:47.267193  243697 cache.go:115] /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1127 11:49:47.267303  243697 cache.go:115] /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1127 11:49:47.267307  243697 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 191.031µs
	I1127 11:49:47.267316  243697 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1127 11:49:47.267320  243697 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 233.166µs
	I1127 11:49:47.267327  243697 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1127 11:49:47.267343  243697 cache.go:115] /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1127 11:49:47.267346  243697 start.go:369] acquired machines lock for "running-upgrade-256859" in 128.624µs
	I1127 11:49:47.267366  243697 start.go:96] Skipping create...Using existing machine configuration
	I1127 11:49:47.267365  243697 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 274.799µs
	I1127 11:49:47.267371  243697 fix.go:54] fixHost starting: m01
	I1127 11:49:47.267374  243697 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1127 11:49:47.267382  243697 cache.go:87] Successfully saved all images to host disk.
	I1127 11:49:47.267610  243697 cli_runner.go:164] Run: docker container inspect running-upgrade-256859 --format={{.State.Status}}
	I1127 11:49:47.286216  243697 fix.go:102] recreateIfNeeded on running-upgrade-256859: state=Running err=<nil>
	W1127 11:49:47.286286  243697 fix.go:128] unexpected machine state, will restart: <nil>
	I1127 11:49:47.288403  243697 out.go:177] * Updating the running docker "running-upgrade-256859" container ...
	I1127 11:49:47.289940  243697 machine.go:88] provisioning docker machine ...
	I1127 11:49:47.289985  243697 ubuntu.go:169] provisioning hostname "running-upgrade-256859"
	I1127 11:49:47.290078  243697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-256859
	I1127 11:49:47.310436  243697 main.go:141] libmachine: Using SSH client type: native
	I1127 11:49:47.310778  243697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32936 <nil> <nil>}
	I1127 11:49:47.310789  243697 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-256859 && echo "running-upgrade-256859" | sudo tee /etc/hostname
	I1127 11:49:47.427988  243697 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-256859
	
	I1127 11:49:47.428083  243697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-256859
	I1127 11:49:47.446356  243697 main.go:141] libmachine: Using SSH client type: native
	I1127 11:49:47.446880  243697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32936 <nil> <nil>}
	I1127 11:49:47.446907  243697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-256859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-256859/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-256859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 11:49:47.551327  243697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 11:49:47.551356  243697 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17644-72381/.minikube CaCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17644-72381/.minikube}
	I1127 11:49:47.551390  243697 ubuntu.go:177] setting up certificates
	I1127 11:49:47.551404  243697 provision.go:83] configureAuth start
	I1127 11:49:47.551462  243697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-256859
	I1127 11:49:47.568792  243697 provision.go:138] copyHostCerts
	I1127 11:49:47.568844  243697 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem, removing ...
	I1127 11:49:47.568851  243697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem
	I1127 11:49:47.568927  243697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem (1082 bytes)
	I1127 11:49:47.569040  243697 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem, removing ...
	I1127 11:49:47.569052  243697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem
	I1127 11:49:47.569101  243697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem (1123 bytes)
	I1127 11:49:47.569183  243697 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem, removing ...
	I1127 11:49:47.569194  243697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem
	I1127 11:49:47.569230  243697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem (1675 bytes)
	I1127 11:49:47.569330  243697 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-256859 san=[172.17.0.4 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-256859]
	I1127 11:49:47.637592  243697 provision.go:172] copyRemoteCerts
	I1127 11:49:47.637666  243697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 11:49:47.637711  243697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-256859
	I1127 11:49:47.654907  243697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32936 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/running-upgrade-256859/id_rsa Username:docker}
	I1127 11:49:47.734827  243697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1127 11:49:47.751907  243697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1127 11:49:47.768098  243697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 11:49:47.784420  243697 provision.go:86] duration metric: configureAuth took 232.997772ms
	I1127 11:49:47.784448  243697 ubuntu.go:193] setting minikube options for container-runtime
	I1127 11:49:47.784619  243697 config.go:182] Loaded profile config "running-upgrade-256859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1127 11:49:47.784726  243697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-256859
	I1127 11:49:47.804084  243697 main.go:141] libmachine: Using SSH client type: native
	I1127 11:49:47.804459  243697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32936 <nil> <nil>}
	I1127 11:49:47.804488  243697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 11:49:48.248008  243697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 11:49:48.248036  243697 machine.go:91] provisioned docker machine in 958.074728ms
	I1127 11:49:48.248049  243697 start.go:300] post-start starting for "running-upgrade-256859" (driver="docker")
	I1127 11:49:48.248062  243697 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 11:49:48.248128  243697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 11:49:48.248176  243697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-256859
	I1127 11:49:48.264806  243697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32936 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/running-upgrade-256859/id_rsa Username:docker}
	I1127 11:49:48.351106  243697 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 11:49:48.353823  243697 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 11:49:48.353852  243697 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 11:49:48.353867  243697 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 11:49:48.353876  243697 info.go:137] Remote host: Ubuntu 19.10
	I1127 11:49:48.353893  243697 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-72381/.minikube/addons for local assets ...
	I1127 11:49:48.353939  243697 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-72381/.minikube/files for local assets ...
	I1127 11:49:48.354046  243697 filesync.go:149] local asset: /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem -> 791532.pem in /etc/ssl/certs
	I1127 11:49:48.354181  243697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 11:49:48.360730  243697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem --> /etc/ssl/certs/791532.pem (1708 bytes)
	I1127 11:49:48.377342  243697 start.go:303] post-start completed in 129.277457ms
	I1127 11:49:48.377419  243697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 11:49:48.377462  243697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-256859
	I1127 11:49:48.405310  243697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32936 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/running-upgrade-256859/id_rsa Username:docker}
	I1127 11:49:48.488964  243697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 11:49:48.493729  243697 fix.go:56] fixHost completed within 1.226350882s
	I1127 11:49:48.493757  243697 start.go:83] releasing machines lock for "running-upgrade-256859", held for 1.226395818s
	I1127 11:49:48.493826  243697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-256859
	I1127 11:49:48.510437  243697 ssh_runner.go:195] Run: cat /version.json
	I1127 11:49:48.510471  243697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 11:49:48.510505  243697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-256859
	I1127 11:49:48.510564  243697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-256859
	I1127 11:49:48.533140  243697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32936 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/running-upgrade-256859/id_rsa Username:docker}
	I1127 11:49:48.533916  243697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32936 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/running-upgrade-256859/id_rsa Username:docker}
	W1127 11:49:48.638515  243697 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1127 11:49:48.638591  243697 ssh_runner.go:195] Run: systemctl --version
	I1127 11:49:48.642490  243697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 11:49:48.695907  243697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 11:49:48.700004  243697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 11:49:48.715079  243697 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1127 11:49:48.715158  243697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 11:49:48.738771  243697 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1127 11:49:48.738795  243697 start.go:472] detecting cgroup driver to use...
	I1127 11:49:48.738823  243697 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 11:49:48.738858  243697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 11:49:48.761238  243697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 11:49:48.770489  243697 docker.go:203] disabling cri-docker service (if available) ...
	I1127 11:49:48.770548  243697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 11:49:48.779955  243697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 11:49:48.788976  243697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1127 11:49:48.799788  243697 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1127 11:49:48.799842  243697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 11:49:48.908626  243697 docker.go:219] disabling docker service ...
	I1127 11:49:48.908690  243697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 11:49:48.920739  243697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 11:49:48.933778  243697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 11:49:49.020344  243697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 11:49:49.101808  243697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 11:49:49.111460  243697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 11:49:49.124414  243697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1127 11:49:49.124492  243697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 11:49:49.135746  243697 out.go:177] 
	W1127 11:49:49.137544  243697 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1127 11:49:49.137587  243697 out.go:239] * 
	* 
	W1127 11:49:49.138816  243697 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1127 11:49:49.140265  243697 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-256859 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
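
The root cause is in the stderr above: the new binary unconditionally runs sed on /etc/crio/crio.conf.d/02-crio.conf to set pause_image, but the machine provisioned by minikube v1.9.0 (Ubuntu 19.10, per the "Remote host" line in the log) predates that drop-in file, so sed exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal sketch of a guarded version of that step, assuming a POSIX shell on the node (ensurePauseImage is a hypothetical helper for illustration, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensurePauseImage creates the cri-o drop-in when the base image does not
	// ship one, so the sed from the log above cannot fail with
	// "No such file or directory" on older provisioned machines.
	func ensurePauseImage(img string) error {
		script := fmt.Sprintf(`sudo mkdir -p /etc/crio/crio.conf.d &&
	if [ ! -f /etc/crio/crio.conf.d/02-crio.conf ]; then
		printf '[crio.image]\npause_image = "%s"\n' | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
	else
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf
	fi`, img, img)
		return exec.Command("sh", "-c", script).Run()
	}

	func main() {
		if err := ensurePauseImage("registry.k8s.io/pause:3.2"); err != nil {
			fmt.Println("configure cri-o failed:", err)
		}
	}

Creating the drop-in only when it is missing leaves the existing sed path untouched for newer base images that already ship 02-crio.conf.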
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-11-27 11:49:49.157902052 +0000 UTC m=+1974.701267299
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-256859
helpers_test.go:235: (dbg) docker inspect running-upgrade-256859:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c878a738137bd73927d0881f78658048d998fc4623962ca41eb17ae00585e371",
	        "Created": "2023-11-27T11:48:46.920959973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 229372,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-27T11:48:47.389231818Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/c878a738137bd73927d0881f78658048d998fc4623962ca41eb17ae00585e371/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c878a738137bd73927d0881f78658048d998fc4623962ca41eb17ae00585e371/hostname",
	        "HostsPath": "/var/lib/docker/containers/c878a738137bd73927d0881f78658048d998fc4623962ca41eb17ae00585e371/hosts",
	        "LogPath": "/var/lib/docker/containers/c878a738137bd73927d0881f78658048d998fc4623962ca41eb17ae00585e371/c878a738137bd73927d0881f78658048d998fc4623962ca41eb17ae00585e371-json.log",
	        "Name": "/running-upgrade-256859",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-256859:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/148d350ac1a9250f4cb4a06ad750fc67ee1beef45276b46772cf8c1acda8f36d-init/diff:/var/lib/docker/overlay2/d63e3c3f9d986cf92c53c38f5babd364ac5ac3293f77f0fdceb5cd11e0fcf02f/diff:/var/lib/docker/overlay2/8e13ed9b68b076fc9f5e86a41d4e737aa4a5c2e4c2fe90015feaf99a60d3d2ce/diff:/var/lib/docker/overlay2/20f6a01697958c3f695f077036d740b35385802ba8df23af3ce30bdce9148fb7/diff:/var/lib/docker/overlay2/91150ed45fa9244e1bb3172bdca90d4c786372def6bb3267445ac0cf56af5a8a/diff:/var/lib/docker/overlay2/eacffcbf0176f936f62dcca82dd0ba20ab28fb2f6c62da15d5f6cabf624330cd/diff:/var/lib/docker/overlay2/1938fc42d041bc2a6aed60f3a730bc5897b1689047ee6f8ca88707903fe01a28/diff:/var/lib/docker/overlay2/b502dd964af3149160036f92e59eeb40c42a3a9aca957a723ef007b8cd48b5a7/diff:/var/lib/docker/overlay2/df1a873e09091e1d5b2912bd3d89334fa00cc1635b3c1cfd416e5c6d8c38cf2a/diff:/var/lib/docker/overlay2/49110b63529898c015c69e6c6585a3f5370750cd4824a8bbc7c282b2483b2644/diff:/var/lib/docker/overlay2/1a5084
c11cf4fe979f47928f763e0e5afa0da9a167aecc4c8020777b7548c576/diff:/var/lib/docker/overlay2/0dd95ee9b767e53c71a431832992757450d4e740ae347be210e5f2f05b745184/diff:/var/lib/docker/overlay2/f721a05ac6ad5ead65eb300fe7cda1ae8b35f3b0c3c650f8a63461027bebfbb9/diff:/var/lib/docker/overlay2/91adb85931d2d403544c34e80f317a20c2a5b48ef45034d01658a41907a7c3f8/diff:/var/lib/docker/overlay2/55c501a11f8aaeccf19483687e559b5231e4615363d55a3ed95e3483115b4e80/diff:/var/lib/docker/overlay2/7acf0015faa9aa038d1807068eab01e42343760584ee3dc46578835a28f39ae5/diff:/var/lib/docker/overlay2/920d52c2bd1d03fd6bd7de44146a875f2707351db3f526999f6a0d45bba529f5/diff:/var/lib/docker/overlay2/ee22daa5f590c5c068c0edd844656c99a52ba34b15014211aa6aef3ba4096d9b/diff:/var/lib/docker/overlay2/33b81559a9e139d056840db2b58193954caa72e80558ed6c5003a4182811279a/diff:/var/lib/docker/overlay2/2d8be6e7ae21edf853191eb0f0ac1feaee75f6323f79b9c581f17bd18eed71a1/diff:/var/lib/docker/overlay2/e862734a38559eb31c4e51ac26be2851ce366e98014ae157235d0eadff83bfc8/diff:/var/lib/d
ocker/overlay2/3ec85e0b81b6f1e0c6b213b1ca2a96c0e0fcb808414039886b3b720ae2dc1be3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/148d350ac1a9250f4cb4a06ad750fc67ee1beef45276b46772cf8c1acda8f36d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/148d350ac1a9250f4cb4a06ad750fc67ee1beef45276b46772cf8c1acda8f36d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/148d350ac1a9250f4cb4a06ad750fc67ee1beef45276b46772cf8c1acda8f36d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-256859",
	                "Source": "/var/lib/docker/volumes/running-upgrade-256859/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-256859",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-256859",
	                "name.minikube.sigs.k8s.io": "running-upgrade-256859",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "95275b56ad2d0090e5f6db5a39d919f3f9cfe791e3afd1bc32410e63422ee1ae",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32936"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32935"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32934"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/95275b56ad2d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "b51d3fe0b318a8e8d6475ad4abc4be0ec6a7f3fbd67a4d85165c26c47c89ebb2",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.4",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:04",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "989551f45deedf62c316116e5e8d4ba5ddcd127c7b6920e295de8f5f6803c4a5",
	                    "EndpointID": "b51d3fe0b318a8e8d6475ad4abc4be0ec6a7f3fbd67a4d85165c26c47c89ebb2",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.4",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:04",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
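
Editor's note: the full inspect dump above is kept for archival, but for quick triage `docker inspect` accepts a Go template so only the relevant fields print. A usage sketch against the same container (expected output read off the dump above):

	# Prints "running 172.17.0.4" for the state shown in the dump.
	docker inspect -f '{{.State.Status}} {{.NetworkSettings.IPAddress}}' running-upgrade-256859
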
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-256859 -n running-upgrade-256859
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-256859 -n running-upgrade-256859: exit status 4 (303.847609ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1127 11:49:49.446967  244634 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-256859" does not appear in /home/jenkins/minikube-integration/17644-72381/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-256859" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
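
Editor's note: the exit status 4 pairs with the kubeconfig error in stderr ("running-upgrade-256859" does not appear in the kubeconfig), not with the host itself, which still reports Running. The warning text names the fix; a usage sketch with this profile:

	# Rewrite the kubeconfig entry for the profile, then confirm kubectl points at it.
	out/minikube-linux-amd64 update-context -p running-upgrade-256859
	kubectl config current-context
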
helpers_test.go:175: Cleaning up "running-upgrade-256859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-256859
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-256859: (2.730437523s)
--- FAIL: TestRunningBinaryUpgrade (66.04s)

TestStoppedBinaryUpgrade/Upgrade (102.53s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.134666558.exe start -p stopped-upgrade-148287 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.134666558.exe start -p stopped-upgrade-148287 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m33.685382259s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.134666558.exe -p stopped-upgrade-148287 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.134666558.exe -p stopped-upgrade-148287 stop: (2.319063912s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-148287 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-148287 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.519901997s)

-- stdout --
	* [stopped-upgrade-148287] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-148287 in cluster stopped-upgrade-148287
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-148287" ...
	
	

                                                
** stderr ** 
	I1127 11:48:55.769684  232510 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:48:55.769837  232510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:48:55.769847  232510 out.go:309] Setting ErrFile to fd 2...
	I1127 11:48:55.769855  232510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:48:55.770127  232510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
	I1127 11:48:55.770732  232510 out.go:303] Setting JSON to false
	I1127 11:48:55.772235  232510 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9089,"bootTime":1701076647,"procs":522,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:48:55.772304  232510 start.go:138] virtualization: kvm guest
	I1127 11:48:55.774804  232510 out.go:177] * [stopped-upgrade-148287] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:48:55.776767  232510 out.go:177]   - MINIKUBE_LOCATION=17644
	I1127 11:48:55.776838  232510 notify.go:220] Checking for updates...
	I1127 11:48:55.779525  232510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:48:55.780883  232510 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:48:55.782271  232510 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	I1127 11:48:55.783618  232510 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 11:48:55.785462  232510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 11:48:55.787223  232510 config.go:182] Loaded profile config "stopped-upgrade-148287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1127 11:48:55.787247  232510 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 11:48:55.789101  232510 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1127 11:48:55.790408  232510 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:48:55.813398  232510 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 11:48:55.813480  232510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:48:55.869879  232510 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:true NGoroutines:91 SystemTime:2023-11-27 11:48:55.861212114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:48:55.869977  232510 docker.go:295] overlay module found
	I1127 11:48:55.871875  232510 out.go:177] * Using the docker driver based on existing profile
	I1127 11:48:55.873296  232510 start.go:298] selected driver: docker
	I1127 11:48:55.873312  232510 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-148287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-148287 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1127 11:48:55.873449  232510 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 11:48:55.874208  232510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:48:55.931979  232510 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:true NGoroutines:91 SystemTime:2023-11-27 11:48:55.92383797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:48:55.932318  232510 cni.go:84] Creating CNI manager for ""
	I1127 11:48:55.932345  232510 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1127 11:48:55.932359  232510 start_flags.go:323] config:
	{Name:stopped-upgrade-148287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-148287 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1127 11:48:55.934428  232510 out.go:177] * Starting control plane node stopped-upgrade-148287 in cluster stopped-upgrade-148287
	I1127 11:48:55.935822  232510 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 11:48:55.937239  232510 out.go:177] * Pulling base image ...
	I1127 11:48:55.938529  232510 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1127 11:48:55.938558  232510 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 11:48:55.954399  232510 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1127 11:48:55.954422  232510 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	W1127 11:48:55.964671  232510 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1127 11:48:55.964857  232510 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/stopped-upgrade-148287/config.json ...
	I1127 11:48:55.964912  232510 cache.go:107] acquiring lock: {Name:mk4f97eb860d98aad113203d258b73344e53c511 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:48:55.964968  232510 cache.go:107] acquiring lock: {Name:mk3a30466c9cbee83b30c62f064af651959c9ff2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:48:55.964995  232510 cache.go:107] acquiring lock: {Name:mk5adfc9fa3104d0497e36136a54e204efb78f1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:48:55.965039  232510 cache.go:115] /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1127 11:48:55.965022  232510 cache.go:107] acquiring lock: {Name:mka5421340475568c2b43b2f8798dad11817e030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:48:55.965051  232510 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 146.502µs
	I1127 11:48:55.965046  232510 cache.go:107] acquiring lock: {Name:mkc145ca0afebbb70bfeac6f0773a24dc6f98769 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:48:55.965072  232510 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1127 11:48:55.965086  232510 cache.go:107] acquiring lock: {Name:mk429e5724ab879c0ed969b44f07de85293ee406 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:48:55.964961  232510 cache.go:107] acquiring lock: {Name:mke76eaaffea8a3cc5e0440afed1826e347a09a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:48:55.965085  232510 cache.go:107] acquiring lock: {Name:mk7c18fe3e9f96884546d56f057443deffa42614 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:48:55.965105  232510 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1127 11:48:55.965142  232510 cache.go:194] Successfully downloaded all kic artifacts
	I1127 11:48:55.965177  232510 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1127 11:48:55.965186  232510 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I1127 11:48:55.965205  232510 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1127 11:48:55.965226  232510 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I1127 11:48:55.965234  232510 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I1127 11:48:55.965176  232510 start.go:365] acquiring machines lock for stopped-upgrade-148287: {Name:mk611793a052b7966eb20968b3efef86182ba49a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:48:55.965319  232510 start.go:369] acquired machines lock for "stopped-upgrade-148287" in 48.944µs
	I1127 11:48:55.965134  232510 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I1127 11:48:55.965345  232510 start.go:96] Skipping create...Using existing machine configuration
	I1127 11:48:55.965353  232510 fix.go:54] fixHost starting: m01
	I1127 11:48:55.965646  232510 cli_runner.go:164] Run: docker container inspect stopped-upgrade-148287 --format={{.State.Status}}
	I1127 11:48:55.966082  232510 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I1127 11:48:55.966254  232510 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I1127 11:48:55.966258  232510 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1127 11:48:55.966258  232510 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1127 11:48:55.966275  232510 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I1127 11:48:55.966329  232510 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1127 11:48:55.966333  232510 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I1127 11:48:55.985237  232510 fix.go:102] recreateIfNeeded on stopped-upgrade-148287: state=Stopped err=<nil>
	W1127 11:48:55.985261  232510 fix.go:128] unexpected machine state, will restart: <nil>
	I1127 11:48:55.987362  232510 out.go:177] * Restarting existing docker container for "stopped-upgrade-148287" ...
	I1127 11:48:55.988728  232510 cli_runner.go:164] Run: docker start stopped-upgrade-148287
	I1127 11:48:56.117334  232510 cache.go:162] opening:  /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1127 11:48:56.149082  232510 cache.go:162] opening:  /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I1127 11:48:56.150904  232510 cache.go:162] opening:  /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1127 11:48:56.153483  232510 cache.go:162] opening:  /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I1127 11:48:56.156991  232510 cache.go:162] opening:  /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I1127 11:48:56.159865  232510 cache.go:162] opening:  /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1127 11:48:56.196330  232510 cache.go:162] opening:  /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I1127 11:48:56.241626  232510 cli_runner.go:164] Run: docker container inspect stopped-upgrade-148287 --format={{.State.Status}}
	I1127 11:48:56.252697  232510 cache.go:157] /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1127 11:48:56.252729  232510 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 287.72188ms
	I1127 11:48:56.252745  232510 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1127 11:48:56.267244  232510 kic.go:430] container "stopped-upgrade-148287" state is running.
	I1127 11:48:56.267741  232510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-148287
	I1127 11:48:56.288054  232510 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/stopped-upgrade-148287/config.json ...
	I1127 11:48:56.288562  232510 machine.go:88] provisioning docker machine ...
	I1127 11:48:56.288613  232510 ubuntu.go:169] provisioning hostname "stopped-upgrade-148287"
	I1127 11:48:56.288681  232510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-148287
	I1127 11:48:56.314252  232510 main.go:141] libmachine: Using SSH client type: native
	I1127 11:48:56.314795  232510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32939 <nil> <nil>}
	I1127 11:48:56.314816  232510 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-148287 && echo "stopped-upgrade-148287" | sudo tee /etc/hostname
	I1127 11:48:56.315660  232510 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35302->127.0.0.1:32939: read: connection reset by peer
	I1127 11:48:56.590545  232510 cache.go:157] /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1127 11:48:56.590573  232510 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 625.487224ms
	I1127 11:48:56.590595  232510 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1127 11:48:56.949138  232510 cache.go:157] /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1127 11:48:56.949172  232510 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 984.112061ms
	I1127 11:48:56.949189  232510 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1127 11:48:57.122219  232510 cache.go:157] /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1127 11:48:57.122255  232510 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.157221245s
	I1127 11:48:57.122275  232510 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1127 11:48:57.152490  232510 cache.go:157] /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1127 11:48:57.152515  232510 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.187520839s
	I1127 11:48:57.152528  232510 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1127 11:48:57.462829  232510 cache.go:157] /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1127 11:48:57.462862  232510 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.497899085s
	I1127 11:48:57.462879  232510 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1127 11:48:57.542303  232510 cache.go:157] /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1127 11:48:57.542335  232510 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 1.577384255s
	I1127 11:48:57.542351  232510 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1127 11:48:57.542395  232510 cache.go:87] Successfully saved all images to host disk.
	I1127 11:48:59.431944  232510 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-148287
	
	I1127 11:48:59.432031  232510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-148287
	I1127 11:48:59.452444  232510 main.go:141] libmachine: Using SSH client type: native
	I1127 11:48:59.452837  232510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32939 <nil> <nil>}
	I1127 11:48:59.452865  232510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-148287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-148287/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-148287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 11:48:59.559650  232510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 11:48:59.559702  232510 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17644-72381/.minikube CaCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17644-72381/.minikube}
	I1127 11:48:59.559740  232510 ubuntu.go:177] setting up certificates
	I1127 11:48:59.559757  232510 provision.go:83] configureAuth start
	I1127 11:48:59.559827  232510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-148287
	I1127 11:48:59.579043  232510 provision.go:138] copyHostCerts
	I1127 11:48:59.579110  232510 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem, removing ...
	I1127 11:48:59.579126  232510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem
	I1127 11:48:59.579201  232510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/ca.pem (1082 bytes)
	I1127 11:48:59.579325  232510 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem, removing ...
	I1127 11:48:59.579338  232510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem
	I1127 11:48:59.579380  232510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/cert.pem (1123 bytes)
	I1127 11:48:59.579464  232510 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem, removing ...
	I1127 11:48:59.579473  232510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem
	I1127 11:48:59.579496  232510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-72381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17644-72381/.minikube/key.pem (1675 bytes)
	I1127 11:48:59.579575  232510 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-148287 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-148287]
	I1127 11:48:59.934041  232510 provision.go:172] copyRemoteCerts
	I1127 11:48:59.934119  232510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 11:48:59.934165  232510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-148287
	I1127 11:48:59.951010  232510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32939 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/stopped-upgrade-148287/id_rsa Username:docker}
	I1127 11:49:00.030796  232510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1127 11:49:00.048260  232510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1127 11:49:00.065690  232510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1127 11:49:00.082465  232510 provision.go:86] duration metric: configureAuth took 522.686841ms
	I1127 11:49:00.082501  232510 ubuntu.go:193] setting minikube options for container-runtime
	I1127 11:49:00.082718  232510 config.go:182] Loaded profile config "stopped-upgrade-148287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1127 11:49:00.082854  232510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-148287
	I1127 11:49:00.101417  232510 main.go:141] libmachine: Using SSH client type: native
	I1127 11:49:00.101769  232510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 127.0.0.1 32939 <nil> <nil>}
	I1127 11:49:00.101789  232510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 11:49:01.405333  232510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 11:49:01.405368  232510 machine.go:91] provisioned docker machine in 5.11676728s
	I1127 11:49:01.405382  232510 start.go:300] post-start starting for "stopped-upgrade-148287" (driver="docker")
	I1127 11:49:01.405399  232510 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 11:49:01.405481  232510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 11:49:01.405531  232510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-148287
	I1127 11:49:01.423400  232510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32939 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/stopped-upgrade-148287/id_rsa Username:docker}
	I1127 11:49:01.503452  232510 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 11:49:01.506370  232510 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 11:49:01.506406  232510 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 11:49:01.506419  232510 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 11:49:01.506429  232510 info.go:137] Remote host: Ubuntu 19.10
	I1127 11:49:01.506446  232510 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-72381/.minikube/addons for local assets ...
	I1127 11:49:01.506553  232510 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-72381/.minikube/files for local assets ...
	I1127 11:49:01.506655  232510 filesync.go:149] local asset: /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem -> 791532.pem in /etc/ssl/certs
	I1127 11:49:01.506779  232510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 11:49:01.513494  232510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/ssl/certs/791532.pem --> /etc/ssl/certs/791532.pem (1708 bytes)
	I1127 11:49:01.531912  232510 start.go:303] post-start completed in 126.511047ms
	I1127 11:49:01.531995  232510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 11:49:01.532048  232510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-148287
	I1127 11:49:01.553172  232510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32939 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/stopped-upgrade-148287/id_rsa Username:docker}
	I1127 11:49:01.632825  232510 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 11:49:01.637210  232510 fix.go:56] fixHost completed within 5.671849098s
	I1127 11:49:01.637240  232510 start.go:83] releasing machines lock for "stopped-upgrade-148287", held for 5.67190983s
	I1127 11:49:01.637312  232510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-148287
	I1127 11:49:01.656475  232510 ssh_runner.go:195] Run: cat /version.json
	I1127 11:49:01.656543  232510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-148287
	I1127 11:49:01.656485  232510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 11:49:01.656626  232510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-148287
	I1127 11:49:01.675842  232510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32939 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/stopped-upgrade-148287/id_rsa Username:docker}
	I1127 11:49:01.676913  232510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32939 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/stopped-upgrade-148287/id_rsa Username:docker}
	W1127 11:49:01.755004  232510 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1127 11:49:01.755078  232510 ssh_runner.go:195] Run: systemctl --version
	I1127 11:49:01.759055  232510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 11:49:01.813518  232510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 11:49:01.817915  232510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 11:49:01.835491  232510 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1127 11:49:01.835570  232510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 11:49:01.862767  232510 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1127 11:49:01.862794  232510 start.go:472] detecting cgroup driver to use...
	I1127 11:49:01.862832  232510 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 11:49:01.862882  232510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 11:49:01.886224  232510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 11:49:01.896675  232510 docker.go:203] disabling cri-docker service (if available) ...
	I1127 11:49:01.896744  232510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 11:49:01.907292  232510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 11:49:01.917551  232510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1127 11:49:01.927390  232510 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1127 11:49:01.927460  232510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 11:49:02.003256  232510 docker.go:219] disabling docker service ...
	I1127 11:49:02.003370  232510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 11:49:02.013978  232510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 11:49:02.025986  232510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 11:49:02.095743  232510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 11:49:02.182920  232510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 11:49:02.193531  232510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 11:49:02.208733  232510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1127 11:49:02.208805  232510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 11:49:02.220384  232510 out.go:177] 
	W1127 11:49:02.222115  232510 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1127 11:49:02.222145  232510 out.go:239] * 
	W1127 11:49:02.223132  232510 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1127 11:49:02.225260  232510 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-148287 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (102.53s)
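Note for triage: the failing step above is an unguarded in-place sed against /etc/crio/crio.conf.d/02-crio.conf, a drop-in that the older v1.9.0-era image does not ship, so sed exits with status 2. A minimal defensive sketch in bash, assuming root on the node (the path and pause image tag are taken from the log above; the guard itself is illustrative, not minikube's actual fix):

	# Illustrative only: set pause_image, creating the CRI-O drop-in when it is missing.
	conf=/etc/crio/crio.conf.d/02-crio.conf
	pause='registry.k8s.io/pause:3.2'
	sudo mkdir -p "$(dirname "$conf")"
	if sudo grep -q '^pause_image' "$conf" 2>/dev/null; then
	  # Key present: rewrite it in place, as the failing step attempted.
	  sudo sed -i "s|^.*pause_image = .*$|pause_image = \"$pause\"|" "$conf"
	else
	  # File or key absent: append a [crio.image] section instead of letting sed fail.
	  printf '[crio.image]\npause_image = "%s"\n' "$pause" | sudo tee -a "$conf" >/dev/null
	fi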

                                                
                                    

Test pass (277/308)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.06
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.4/json-events 6.28
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.21
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
18 TestDownloadOnlyKic 1.28
19 TestBinaryMirror 0.72
20 TestOffline 86.92
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
25 TestAddons/Setup 154.18
27 TestAddons/parallel/Registry 15.1
29 TestAddons/parallel/InspektorGadget 10.98
30 TestAddons/parallel/MetricsServer 6.15
31 TestAddons/parallel/HelmTiller 12.07
33 TestAddons/parallel/CSI 48.4
34 TestAddons/parallel/Headlamp 13.51
35 TestAddons/parallel/CloudSpanner 5.61
36 TestAddons/parallel/LocalPath 12.75
37 TestAddons/parallel/NvidiaDevicePlugin 5.52
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/StoppedEnableDisable 12.19
42 TestCertOptions 31.65
43 TestCertExpiration 227.62
45 TestForceSystemdFlag 31.95
46 TestForceSystemdEnv 25.1
48 TestKVMDriverInstallOrUpdate 2.91
52 TestErrorSpam/setup 22.01
53 TestErrorSpam/start 0.64
54 TestErrorSpam/status 0.88
55 TestErrorSpam/pause 1.48
56 TestErrorSpam/unpause 1.48
57 TestErrorSpam/stop 1.4
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 40.64
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 42.51
64 TestFunctional/serial/KubeContext 0.05
65 TestFunctional/serial/KubectlGetPods 0.08
68 TestFunctional/serial/CacheCmd/cache/add_remote 2.53
69 TestFunctional/serial/CacheCmd/cache/add_local 1.62
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
74 TestFunctional/serial/CacheCmd/cache/delete 0.12
75 TestFunctional/serial/MinikubeKubectlCmd 0.12
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
77 TestFunctional/serial/ExtraConfig 32.81
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.35
80 TestFunctional/serial/LogsFileCmd 1.35
81 TestFunctional/serial/InvalidService 3.99
84 TestFunctional/parallel/DashboardCmd 12.5
85 TestFunctional/parallel/DryRun 0.44
86 TestFunctional/parallel/InternationalLanguage 0.19
87 TestFunctional/parallel/StatusCmd 1.03
91 TestFunctional/parallel/ServiceCmdConnect 12.61
92 TestFunctional/parallel/AddonsCmd 0.19
93 TestFunctional/parallel/PersistentVolumeClaim 33.15
95 TestFunctional/parallel/SSHCmd 0.71
96 TestFunctional/parallel/CpCmd 1.28
97 TestFunctional/parallel/MySQL 20.93
98 TestFunctional/parallel/FileSync 0.29
99 TestFunctional/parallel/CertSync 1.9
103 TestFunctional/parallel/NodeLabels 0.06
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
107 TestFunctional/parallel/License 0.16
109 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
110 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
112 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.37
113 TestFunctional/parallel/ServiceCmd/DeployApp 12.15
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
115 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.03
119 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
120 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
121 TestFunctional/parallel/ProfileCmd/profile_list 0.37
122 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
123 TestFunctional/parallel/MountCmd/any-port 6.95
124 TestFunctional/parallel/ServiceCmd/List 0.54
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
127 TestFunctional/parallel/ServiceCmd/Format 0.38
128 TestFunctional/parallel/ServiceCmd/URL 0.35
129 TestFunctional/parallel/Version/short 0.1
130 TestFunctional/parallel/Version/components 0.65
131 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
132 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
133 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
134 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
135 TestFunctional/parallel/ImageCommands/ImageBuild 2.33
136 TestFunctional/parallel/ImageCommands/Setup 1.24
137 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.91
138 TestFunctional/parallel/MountCmd/specific-port 1.88
139 TestFunctional/parallel/MountCmd/VerifyCleanup 1.98
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.78
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.11
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.31
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.79
149 TestFunctional/delete_addon-resizer_images 0.07
150 TestFunctional/delete_my-image_image 0.02
151 TestFunctional/delete_minikube_cached_images 0.02
155 TestIngressAddonLegacy/StartLegacyK8sCluster 83.88
157 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.28
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.59
162 TestJSONOutput/start/Command 66.55
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.68
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.6
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 5.79
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.23
187 TestKicCustomNetwork/create_custom_network 30.79
188 TestKicCustomNetwork/use_default_bridge_network 23.33
189 TestKicExistingNetwork 26.44
190 TestKicCustomSubnet 24.69
191 TestKicStaticIP 25.22
192 TestMainNoArgs 0.06
193 TestMinikubeProfile 45.51
196 TestMountStart/serial/StartWithMountFirst 8.27
197 TestMountStart/serial/VerifyMountFirst 0.25
198 TestMountStart/serial/StartWithMountSecond 5.4
199 TestMountStart/serial/VerifyMountSecond 0.25
200 TestMountStart/serial/DeleteFirst 1.61
201 TestMountStart/serial/VerifyMountPostDelete 0.25
202 TestMountStart/serial/Stop 1.21
203 TestMountStart/serial/RestartStopped 7
204 TestMountStart/serial/VerifyMountPostStop 0.26
207 TestMultiNode/serial/FreshStart2Nodes 112.61
208 TestMultiNode/serial/DeployApp2Nodes 4.62
210 TestMultiNode/serial/AddNode 49.77
211 TestMultiNode/serial/ProfileList 0.29
212 TestMultiNode/serial/CopyFile 9.73
213 TestMultiNode/serial/StopNode 2.19
214 TestMultiNode/serial/StartAfterStop 11.07
215 TestMultiNode/serial/RestartKeepsNodes 109.61
216 TestMultiNode/serial/DeleteNode 4.74
217 TestMultiNode/serial/StopMultiNode 23.91
218 TestMultiNode/serial/RestartMultiNode 76.58
219 TestMultiNode/serial/ValidateNameConflict 26.51
224 TestPreload 122.1
226 TestScheduledStopUnix 100.78
229 TestInsufficientStorage 13.19
232 TestKubernetesUpgrade 365.89
233 TestMissingContainerUpgrade 171.18
234 TestStoppedBinaryUpgrade/Setup 0.46
236 TestStoppedBinaryUpgrade/MinikubeLogs 0.66
245 TestPause/serial/Start 43.65
246 TestPause/serial/SecondStartNoReconfiguration 44
248 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
249 TestNoKubernetes/serial/StartWithK8s 24.06
257 TestNetworkPlugins/group/false 4.23
258 TestNoKubernetes/serial/StartWithStopK8s 26.72
262 TestPause/serial/Pause 0.71
263 TestPause/serial/VerifyStatus 0.31
264 TestPause/serial/Unpause 0.66
265 TestPause/serial/PauseAgain 0.81
266 TestPause/serial/DeletePaused 2.71
267 TestPause/serial/VerifyDeletedResources 28.54
268 TestNoKubernetes/serial/Start 7.49
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
270 TestNoKubernetes/serial/ProfileList 14.8
271 TestNoKubernetes/serial/Stop 1.23
272 TestNoKubernetes/serial/StartNoArgs 6.73
273 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
275 TestStartStop/group/old-k8s-version/serial/FirstStart 120.66
277 TestStartStop/group/no-preload/serial/FirstStart 54.26
278 TestStartStop/group/no-preload/serial/DeployApp 8.39
279 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.94
280 TestStartStop/group/no-preload/serial/Stop 12.01
281 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.28
282 TestStartStop/group/no-preload/serial/SecondStart 333.24
283 TestStartStop/group/old-k8s-version/serial/DeployApp 9.53
285 TestStartStop/group/embed-certs/serial/FirstStart 66.82
286 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.79
287 TestStartStop/group/old-k8s-version/serial/Stop 11.99
288 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
289 TestStartStop/group/old-k8s-version/serial/SecondStart 64.15
290 TestStartStop/group/embed-certs/serial/DeployApp 9.41
291 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.89
292 TestStartStop/group/embed-certs/serial/Stop 11.96
293 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
294 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
295 TestStartStop/group/embed-certs/serial/SecondStart 336.2
297 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 69.69
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.39
300 TestStartStop/group/old-k8s-version/serial/Pause 3.39
302 TestStartStop/group/newest-cni/serial/FirstStart 36.57
303 TestStartStop/group/newest-cni/serial/DeployApp 0
304 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.84
305 TestStartStop/group/newest-cni/serial/Stop 1.21
306 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
307 TestStartStop/group/newest-cni/serial/SecondStart 25.95
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.47
309 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
310 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
311 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
312 TestStartStop/group/newest-cni/serial/Pause 2.53
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
314 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.97
315 TestNetworkPlugins/group/auto/Start 71.42
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
317 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 338.31
318 TestNetworkPlugins/group/auto/KubeletFlags 0.27
319 TestNetworkPlugins/group/auto/NetCatPod 10.29
320 TestNetworkPlugins/group/auto/DNS 0.15
321 TestNetworkPlugins/group/auto/Localhost 0.13
322 TestNetworkPlugins/group/auto/HairPin 0.13
323 TestNetworkPlugins/group/kindnet/Start 45.23
324 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.02
325 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
326 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
327 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
328 TestNetworkPlugins/group/kindnet/NetCatPod 9.23
329 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
330 TestStartStop/group/no-preload/serial/Pause 2.68
331 TestNetworkPlugins/group/kindnet/DNS 0.16
332 TestNetworkPlugins/group/kindnet/Localhost 0.2
333 TestNetworkPlugins/group/kindnet/HairPin 0.12
334 TestNetworkPlugins/group/calico/Start 64.31
335 TestNetworkPlugins/group/custom-flannel/Start 61.83
336 TestNetworkPlugins/group/calico/ControllerPod 5.03
337 TestNetworkPlugins/group/calico/KubeletFlags 0.33
338 TestNetworkPlugins/group/calico/NetCatPod 10.32
339 TestNetworkPlugins/group/calico/DNS 0.16
340 TestNetworkPlugins/group/calico/Localhost 0.14
341 TestNetworkPlugins/group/calico/HairPin 0.14
342 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
343 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.25
344 TestNetworkPlugins/group/custom-flannel/DNS 0.2
345 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
346 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
347 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 15.02
348 TestNetworkPlugins/group/enable-default-cni/Start 44.21
349 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
350 TestNetworkPlugins/group/flannel/Start 61.84
351 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.39
352 TestStartStop/group/embed-certs/serial/Pause 3.32
353 TestNetworkPlugins/group/bridge/Start 79.52
354 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
355 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
356 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
357 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
358 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
359 TestNetworkPlugins/group/flannel/ControllerPod 5.02
360 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
361 TestNetworkPlugins/group/flannel/NetCatPod 10.23
362 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.02
363 TestNetworkPlugins/group/flannel/DNS 0.17
364 TestNetworkPlugins/group/flannel/Localhost 0.15
365 TestNetworkPlugins/group/flannel/HairPin 0.15
366 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
367 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
368 TestNetworkPlugins/group/bridge/NetCatPod 10.27
369 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.34
370 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.79
371 TestNetworkPlugins/group/bridge/DNS 0.16
372 TestNetworkPlugins/group/bridge/Localhost 0.14
373 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.16.0/json-events (7.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-281039 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-281039 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.06357645s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.06s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-281039
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-281039: exit status 85 (90.372781ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-281039 | jenkins | v1.32.0 | 27 Nov 23 11:16 UTC |          |
	|         | -p download-only-281039        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 11:16:54
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 11:16:54.562692   79165 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:16:54.562807   79165 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:16:54.562816   79165 out.go:309] Setting ErrFile to fd 2...
	I1127 11:16:54.562821   79165 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:16:54.563011   79165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
	W1127 11:16:54.563143   79165 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17644-72381/.minikube/config/config.json: open /home/jenkins/minikube-integration/17644-72381/.minikube/config/config.json: no such file or directory
	I1127 11:16:54.563774   79165 out.go:303] Setting JSON to true
	I1127 11:16:54.564619   79165 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7168,"bootTime":1701076647,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:16:54.564679   79165 start.go:138] virtualization: kvm guest
	I1127 11:16:54.567275   79165 out.go:97] [download-only-281039] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:16:54.568973   79165 out.go:169] MINIKUBE_LOCATION=17644
	W1127 11:16:54.567420   79165 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball: no such file or directory
	I1127 11:16:54.567475   79165 notify.go:220] Checking for updates...
	I1127 11:16:54.571941   79165 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:16:54.573512   79165 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:16:54.575079   79165 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	I1127 11:16:54.576846   79165 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1127 11:16:54.579772   79165 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 11:16:54.580022   79165 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:16:54.603040   79165 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 11:16:54.603127   79165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:16:54.962614   79165 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-11-27 11:16:54.953907139 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:16:54.962739   79165 docker.go:295] overlay module found
	I1127 11:16:54.964753   79165 out.go:97] Using the docker driver based on user configuration
	I1127 11:16:54.964779   79165 start.go:298] selected driver: docker
	I1127 11:16:54.964786   79165 start.go:902] validating driver "docker" against <nil>
	I1127 11:16:54.964983   79165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:16:55.018490   79165 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-11-27 11:16:55.010522052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:16:55.018677   79165 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 11:16:55.019391   79165 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1127 11:16:55.019576   79165 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1127 11:16:55.021703   79165 out.go:169] Using Docker driver with root privileges
	I1127 11:16:55.023100   79165 cni.go:84] Creating CNI manager for ""
	I1127 11:16:55.023120   79165 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 11:16:55.023131   79165 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1127 11:16:55.023142   79165 start_flags.go:323] config:
	{Name:download-only-281039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-281039 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:16:55.024851   79165 out.go:97] Starting control plane node download-only-281039 in cluster download-only-281039
	I1127 11:16:55.024874   79165 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 11:16:55.026349   79165 out.go:97] Pulling base image ...
	I1127 11:16:55.026371   79165 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1127 11:16:55.026509   79165 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 11:16:55.041556   79165 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 11:16:55.041766   79165 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1127 11:16:55.041845   79165 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 11:16:55.065258   79165 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1127 11:16:55.065285   79165 cache.go:56] Caching tarball of preloaded images
	I1127 11:16:55.065430   79165 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1127 11:16:55.067470   79165 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1127 11:16:55.067488   79165 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1127 11:16:55.095213   79165 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1127 11:16:58.913975   79165 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1127 11:16:58.914067   79165 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-281039"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
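Note: the preload download in the log above carries a ?checksum=md5:... parameter, after which minikube saves and verifies the digest. A small verify-after-download sketch in bash, assuming curl and md5sum are available (URL and digest copied from the log; the script itself is illustrative, not minikube's implementation):

	# Illustrative only: fetch the preload tarball and verify its md5 digest.
	url='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4'
	want='432b600409d778ea7a21214e83948570'  # md5 from the checksum parameter in the log
	out="$(basename "$url")"
	curl -fsSL "$url" -o "$out"
	got="$(md5sum "$out" | awk '{print $1}')"
	if [ "$got" != "$want" ]; then
	  echo "checksum mismatch: got $got, want $want" >&2
	  exit 1
	fi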

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (6.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-281039 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-281039 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.281254012s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (6.28s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-281039
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-281039: exit status 85 (78.265449ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-281039 | jenkins | v1.32.0 | 27 Nov 23 11:16 UTC |          |
	|         | -p download-only-281039        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-281039 | jenkins | v1.32.0 | 27 Nov 23 11:17 UTC |          |
	|         | -p download-only-281039        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 11:17:01
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 11:17:01.725347   79316 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:17:01.725660   79316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:17:01.725672   79316 out.go:309] Setting ErrFile to fd 2...
	I1127 11:17:01.725680   79316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:17:01.725906   79316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
	W1127 11:17:01.726045   79316 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17644-72381/.minikube/config/config.json: open /home/jenkins/minikube-integration/17644-72381/.minikube/config/config.json: no such file or directory
	I1127 11:17:01.726514   79316 out.go:303] Setting JSON to true
	I1127 11:17:01.727431   79316 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7175,"bootTime":1701076647,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:17:01.727508   79316 start.go:138] virtualization: kvm guest
	I1127 11:17:01.730148   79316 out.go:97] [download-only-281039] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:17:01.732220   79316 out.go:169] MINIKUBE_LOCATION=17644
	I1127 11:17:01.730444   79316 notify.go:220] Checking for updates...
	I1127 11:17:01.736334   79316 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:17:01.738407   79316 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:17:01.740656   79316 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	I1127 11:17:01.742615   79316 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1127 11:17:01.745901   79316 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 11:17:01.746643   79316 config.go:182] Loaded profile config "download-only-281039": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1127 11:17:01.746725   79316 start.go:810] api.Load failed for download-only-281039: filestore "download-only-281039": Docker machine "download-only-281039" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 11:17:01.746853   79316 driver.go:378] Setting default libvirt URI to qemu:///system
	W1127 11:17:01.746903   79316 start.go:810] api.Load failed for download-only-281039: filestore "download-only-281039": Docker machine "download-only-281039" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 11:17:01.772341   79316 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 11:17:01.772608   79316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:17:01.832769   79316 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-27 11:17:01.82425689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:17:01.832868   79316 docker.go:295] overlay module found
	I1127 11:17:01.835097   79316 out.go:97] Using the docker driver based on existing profile
	I1127 11:17:01.835139   79316 start.go:298] selected driver: docker
	I1127 11:17:01.835146   79316 start.go:902] validating driver "docker" against &{Name:download-only-281039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-281039 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:17:01.835312   79316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:17:01.889800   79316 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-27 11:17:01.88086783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:17:01.890453   79316 cni.go:84] Creating CNI manager for ""
	I1127 11:17:01.890477   79316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 11:17:01.890485   79316 start_flags.go:323] config:
	{Name:download-only-281039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-281039 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:17:01.892683   79316 out.go:97] Starting control plane node download-only-281039 in cluster download-only-281039
	I1127 11:17:01.892704   79316 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 11:17:01.894581   79316 out.go:97] Pulling base image ...
	I1127 11:17:01.894622   79316 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 11:17:01.894724   79316 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 11:17:01.911624   79316 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 11:17:01.911768   79316 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1127 11:17:01.911786   79316 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory, skipping pull
	I1127 11:17:01.911791   79316 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in cache, skipping pull
	I1127 11:17:01.911801   79316 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1127 11:17:01.925739   79316 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1127 11:17:01.925773   79316 cache.go:56] Caching tarball of preloaded images
	I1127 11:17:01.925907   79316 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 11:17:01.928086   79316 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1127 11:17:01.928120   79316 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1127 11:17:01.956715   79316 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1127 11:17:06.278164   79316 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1127 11:17:06.278291   79316 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17644-72381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-281039"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-281039
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (1.28s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-213707 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-213707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-213707
--- PASS: TestDownloadOnlyKic (1.28s)

                                                
                                    
TestBinaryMirror (0.72s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-041707 --alsologtostderr --binary-mirror http://127.0.0.1:42111 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-041707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-041707
--- PASS: TestBinaryMirror (0.72s)

                                                
                                    
TestOffline (86.92s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-977610 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-977610 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m24.4746977s)
helpers_test.go:175: Cleaning up "offline-crio-977610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-977610
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-977610: (2.447236664s)
--- PASS: TestOffline (86.92s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-112776
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-112776: exit status 85 (65.725218ms)

                                                
                                                
-- stdout --
	* Profile "addons-112776" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-112776"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-112776
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-112776: exit status 85 (66.652068ms)
-- stdout --
	* Profile "addons-112776" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-112776"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (154.18s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-112776 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-112776 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m34.182895258s)
--- PASS: TestAddons/Setup (154.18s)
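
All of the addons under test are enabled in a single `minikube start` invocation, so one 2m34s setup serves every parallel subtest that follows. A reduced sketch of the same pattern; the profile name addons-demo is illustrative:

	# Create a cluster with several addons enabled at creation time (subset of the list above).
	minikube start -p addons-demo --wait=true --memory=4000 --driver=docker --container-runtime=crio \
	  --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns
	# Individual addons can also be toggled after startup.
	minikube addons enable csi-hostpath-driver -p addons-demo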

TestAddons/parallel/Registry (15.1s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 27.079575ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-lmltk" [2af7cf8b-6b3b-4728-be19-f6cb5e9d7195] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010954215s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gcphz" [7d830023-3b79-4d5f-b0ed-5cd31be11e05] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.01300177s
addons_test.go:339: (dbg) Run:  kubectl --context addons-112776 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-112776 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-112776 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.981224353s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-112776 ip
2023/11/27 11:19:59 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-112776 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.10s)
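
The registry test probes the addon from two directions: in-cluster DNS via a throwaway busybox pod, and the node IP on port 5000. A sketch of the same checks; the context name is illustrative, and the /v2/_catalog path is the standard registry API rather than something the test itself calls:

	# In-cluster: resolve and probe the registry Service, exactly as the test does.
	kubectl --context addons-demo run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# From the host: registry-proxy exposes port 5000 on the node IP.
	curl -s "http://$(minikube -p addons-demo ip):5000/v2/_catalog"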

TestAddons/parallel/InspektorGadget (10.98s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-x9qkp" [1f82e3a7-7627-4f30-ba97-524d1f3911c7] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.071164202s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-112776
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-112776: (5.909535422s)
--- PASS: TestAddons/parallel/InspektorGadget (10.98s)

TestAddons/parallel/MetricsServer (6.15s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 3.523563ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-fj2dt" [89070c3d-2674-4c90-8ea8-0fd92dd023df] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.024255878s
addons_test.go:414: (dbg) Run:  kubectl --context addons-112776 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-112776 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:431: (dbg) Done: out/minikube-linux-amd64 -p addons-112776 addons disable metrics-server --alsologtostderr -v=1: (1.037090556s)
--- PASS: TestAddons/parallel/MetricsServer (6.15s)
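
`kubectl top` only returns data once the metrics-server pod is healthy, which is why the test waits on the k8s-app=metrics-server selector first. A sketch; the context name is illustrative:

	# Confirm metrics-server is Running, then query pod and node usage.
	kubectl --context addons-demo -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-demo top pods -n kube-system
	kubectl --context addons-demo top nodes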

TestAddons/parallel/HelmTiller (12.07s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.472561ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-jlvnj" [02d82240-a512-4369-8110-df7c8846c5b5] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.012516289s
addons_test.go:472: (dbg) Run:  kubectl --context addons-112776 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-112776 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.450952876s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-112776 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.07s)
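
The tiller check is a one-off Helm 2 client pod run in kube-system; `version` succeeds only if the client can reach the in-cluster tiller-deploy. The same command as the test, with an illustrative context name:

	# Run the Helm 2 client once and ask tiller for its version.
	kubectl --context addons-demo run --rm helm-test --restart=Never \
	  --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version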

TestAddons/parallel/CSI (48.4s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 10.087137ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-112776 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-112776 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3226300b-5c41-401d-a315-3752e17b8593] Pending
helpers_test.go:344: "task-pv-pod" [3226300b-5c41-401d-a315-3752e17b8593] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3226300b-5c41-401d-a315-3752e17b8593] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.008937771s
addons_test.go:583: (dbg) Run:  kubectl --context addons-112776 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-112776 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-112776 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-112776 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-112776 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-112776 delete pod task-pv-pod: (1.187312665s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-112776 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-112776 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-112776 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fb720b62-c0b4-424e-8cd0-ec9dcf5f7f38] Pending
helpers_test.go:344: "task-pv-pod-restore" [fb720b62-c0b4-424e-8cd0-ec9dcf5f7f38] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fb720b62-c0b4-424e-8cd0-ec9dcf5f7f38] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.008382858s
addons_test.go:625: (dbg) Run:  kubectl --context addons-112776 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-112776 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-112776 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-112776 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-112776 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.557894826s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-112776 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.40s)
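
The CSI flow is provision, snapshot, delete, restore, and each gate is a jsonpath poll like the helpers above. A sketch of the same polls, assuming manifests equivalent to the test's testdata; object names are taken from the log, the context name is illustrative:

	# Gate 1: the PVC binds.
	kubectl --context addons-demo get pvc hpvc -o 'jsonpath={.status.phase}'
	# Gate 2: the snapshot becomes usable as a dataSource.
	kubectl --context addons-demo get volumesnapshot new-snapshot-demo -o 'jsonpath={.status.readyToUse}'
	# Gate 3: the restored PVC (its dataSource points at the snapshot) binds.
	kubectl --context addons-demo get pvc hpvc-restore -o 'jsonpath={.status.phase}'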

TestAddons/parallel/Headlamp (13.51s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-112776 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-112776 --alsologtostderr -v=1: (1.504325884s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-qjdms" [de948697-a423-4c20-8f80-0cdde00889e6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-qjdms" [de948697-a423-4c20-8f80-0cdde00889e6] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.008797749s
--- PASS: TestAddons/parallel/Headlamp (13.51s)

TestAddons/parallel/CloudSpanner (5.61s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-2g4tv" [57bf6bc5-4b04-4912-821f-a954306512f8] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009341895s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-112776
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

TestAddons/parallel/LocalPath (12.75s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-112776 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-112776 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112776 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7c36a5ab-cd1f-4512-a542-c1b7e56ac1ba] Pending
helpers_test.go:344: "test-local-path" [7c36a5ab-cd1f-4512-a542-c1b7e56ac1ba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7c36a5ab-cd1f-4512-a542-c1b7e56ac1ba] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7c36a5ab-cd1f-4512-a542-c1b7e56ac1ba] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.010821794s
addons_test.go:890: (dbg) Run:  kubectl --context addons-112776 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-112776 ssh "cat /opt/local-path-provisioner/pvc-dbf749c2-173c-47fe-82f9-107cdc643fe7_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-112776 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-112776 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-112776 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.75s)
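
The local-path check goes beyond pod status: it reads the written file back from the provisioner's host directory over SSH. A sketch; the pvc-<uuid> directory name is generated per claim, so the path below is a placeholder, and the profile name is illustrative:

	# List provisioned volumes on the node, then read the file the pod wrote.
	minikube -p addons-demo ssh "ls /opt/local-path-provisioner/"
	minikube -p addons-demo ssh "cat /opt/local-path-provisioner/<pvc-uuid>_default_test-pvc/file1"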

TestAddons/parallel/NvidiaDevicePlugin (5.52s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-t78st" [fbcd2671-0323-4c2a-81c3-f3d3726e355b] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.030509275s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-112776
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.52s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-112776 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-112776 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (12.19s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-112776
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-112776: (11.908035815s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-112776
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-112776
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-112776
--- PASS: TestAddons/StoppedEnableDisable (12.19s)
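
The point of this test is that addon toggles are accepted while the cluster is stopped, so the state applies on the next start. A sketch of the same sequence; the profile name is illustrative:

	# Addon enable/disable records intent even against a stopped cluster.
	minikube stop -p addons-demo
	minikube addons enable dashboard -p addons-demo
	minikube addons disable dashboard -p addons-demo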

TestCertOptions (31.65s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-712781 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-712781 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (27.060843651s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-712781 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-712781 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-712781 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-712781" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-712781
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-712781: (3.983811445s)
--- PASS: TestCertOptions (31.65s)
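
Every --apiserver-ips entry, --apiserver-names entry, and the custom port should surface in the generated apiserver certificate and kubeconfig, which is what the openssl and kubectl steps verify. A sketch of the same inspection; the profile name is illustrative and the grep is an added convenience:

	# Check the SANs baked into the apiserver cert, then the server URL kubectl was given.
	minikube -p cert-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'
	kubectl --context cert-demo config view --minify -o 'jsonpath={.clusters[0].cluster.server}'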

TestCertExpiration (227.62s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-804435 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-804435 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (30.365135173s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-804435 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-804435 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.917116246s)
helpers_test.go:175: Cleaning up "cert-expiration-804435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-804435
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-804435: (2.340501311s)
--- PASS: TestCertExpiration (227.62s)
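
The pattern here: start with deliberately short-lived certificates, wait past expiry, then restart with a longer --cert-expiration so minikube regenerates them instead of failing. A sketch; the profile name is illustrative:

	# Certs valid for 3 minutes on first start; the second start renews them for 8760h (one year).
	minikube start -p cert-exp-demo --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
	# ...after the 3m window has passed:
	minikube start -p cert-exp-demo --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio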

TestForceSystemdFlag (31.95s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-741404 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-741404 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.674258672s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-741404 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-741404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-741404
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-741404: (4.977473226s)
--- PASS: TestForceSystemdFlag (31.95s)
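
With --force-systemd, the generated CRI-O drop-in should select the systemd cgroup manager, and the test reads that file back over SSH. A sketch; the profile name is illustrative and the grep is an added convenience:

	# Expect cgroup_manager = "systemd" in CRI-O's drop-in config.
	minikube start -p systemd-demo --memory=2048 --force-systemd --driver=docker --container-runtime=crio
	minikube -p systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager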

TestForceSystemdEnv (25.1s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-606257 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-606257 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.743152626s)
helpers_test.go:175: Cleaning up "force-systemd-env-606257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-606257
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-606257: (2.35820147s)
--- PASS: TestForceSystemdEnv (25.10s)

TestKVMDriverInstallOrUpdate (2.91s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.91s)

TestErrorSpam/setup (22.01s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-371863 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-371863 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-371863 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-371863 --driver=docker  --container-runtime=crio: (22.010743697s)
--- PASS: TestErrorSpam/setup (22.01s)

TestErrorSpam/start (0.64s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-371863 --log_dir /tmp/nospam-371863 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-371863 --log_dir /tmp/nospam-371863 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-371863 --log_dir /tmp/nospam-371863 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

TestErrorSpam/status (0.88s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-371863 --log_dir /tmp/nospam-371863 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-371863 --log_dir /tmp/nospam-371863 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-371863 --log_dir /tmp/nospam-371863 status
--- PASS: TestErrorSpam/status (0.88s)

TestErrorSpam/pause (1.48s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-371863 --log_dir /tmp/nospam-371863 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-371863 --log_dir /tmp/nospam-371863 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-371863 --log_dir /tmp/nospam-371863 pause
--- PASS: TestErrorSpam/pause (1.48s)

TestErrorSpam/unpause (1.48s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-371863 --log_dir /tmp/nospam-371863 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-371863 --log_dir /tmp/nospam-371863 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-371863 --log_dir /tmp/nospam-371863 unpause
--- PASS: TestErrorSpam/unpause (1.48s)

TestErrorSpam/stop (1.4s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-371863 --log_dir /tmp/nospam-371863 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-371863 --log_dir /tmp/nospam-371863 stop: (1.201208862s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-371863 --log_dir /tmp/nospam-371863 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-371863 --log_dir /tmp/nospam-371863 stop
--- PASS: TestErrorSpam/stop (1.40s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17644-72381/.minikube/files/etc/test/nested/copy/79153/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (40.64s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-876444 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-876444 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (40.642410315s)
--- PASS: TestFunctional/serial/StartWithProxy (40.64s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.51s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-876444 --alsologtostderr -v=8
E1127 11:24:44.766001   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
E1127 11:24:44.771989   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
E1127 11:24:44.782251   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
E1127 11:24:44.802518   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
E1127 11:24:44.842833   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
E1127 11:24:44.923885   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
E1127 11:24:45.084215   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
E1127 11:24:45.405067   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
E1127 11:24:46.046008   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
E1127 11:24:47.326219   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
E1127 11:24:49.886908   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
E1127 11:24:55.007260   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
E1127 11:25:05.248405   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-876444 --alsologtostderr -v=8: (42.50646572s)
functional_test.go:659: soft start took 42.507237168s for "functional-876444" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.51s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-876444 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.53s)

TestFunctional/serial/CacheCmd/cache/add_local (1.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-876444 /tmp/TestFunctionalserialCacheCmdcacheadd_local345264649/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 cache add minikube-local-cache-test:functional-876444
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-876444 cache add minikube-local-cache-test:functional-876444: (1.287776929s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 cache delete minikube-local-cache-test:functional-876444
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-876444
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.62s)
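
add_local builds a scratch image on the host, copies it into the cluster through the cache, then cleans up both sides. A sketch; the image tag, build context, and profile name are illustrative:

	# Build locally, push into minikube's image cache, then remove from cache and host.
	docker build -t minikube-local-cache-test:demo ./some-build-context
	minikube -p functional-demo cache add minikube-local-cache-test:demo
	minikube -p functional-demo cache delete minikube-local-cache-test:demo
	docker rmi minikube-local-cache-test:demo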

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-876444 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (276.690968ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)
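
`cache reload` pushes every image in the local cache back into the node, which is how the deleted pause:latest reappears above. The same sequence as the test, with an illustrative profile name:

	# Remove an image inside the node, verify it is gone, then restore it from the cache.
	minikube -p functional-demo ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: no such image
	minikube -p functional-demo cache reload
	minikube -p functional-demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again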

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 kubectl -- --context functional-876444 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-876444 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (32.81s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-876444 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1127 11:25:25.729065   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-876444 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.81137859s)
functional_test.go:757: restart took 32.811493642s for "functional-876444" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.81s)
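
--extra-config threads flags through to individual control-plane components (here an apiserver admission plugin) across a restart of an existing cluster, and --wait=all blocks until every component reports healthy. A sketch; the profile name is illustrative:

	# Restart the existing cluster with an extra apiserver flag.
	minikube start -p functional-demo \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all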

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-876444 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.35s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-876444 logs: (1.353279328s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

TestFunctional/serial/LogsFileCmd (1.35s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 logs --file /tmp/TestFunctionalserialLogsFileCmd1432038803/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-876444 logs --file /tmp/TestFunctionalserialLogsFileCmd1432038803/001/logs.txt: (1.347772998s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

TestFunctional/serial/InvalidService (3.99s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-876444 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-876444
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-876444: exit status 115 (335.294336ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31378 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-876444 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.99s)
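
`minikube service` fails fast with SVC_UNREACHABLE when the Service's selector matches no running pod, rather than printing a dead URL. A sketch using the test's own manifest; the context name is illustrative:

	# A Service backed by no running pod makes `minikube service` exit non-zero (status 115 above).
	kubectl --context functional-demo apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-demo
	kubectl --context functional-demo delete -f testdata/invalidsvc.yaml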

TestFunctional/parallel/DashboardCmd (12.5s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-876444 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-876444 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 114768: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.50s)

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-876444 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-876444 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (190.823697ms)
-- stdout --
	* [functional-876444] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1127 11:26:09.073998  114004 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:26:09.074165  114004 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:26:09.074177  114004 out.go:309] Setting ErrFile to fd 2...
	I1127 11:26:09.074185  114004 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:26:09.074409  114004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
	I1127 11:26:09.075485  114004 out.go:303] Setting JSON to false
	I1127 11:26:09.076678  114004 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7722,"bootTime":1701076647,"procs":285,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:26:09.076765  114004 start.go:138] virtualization: kvm guest
	I1127 11:26:09.078702  114004 out.go:177] * [functional-876444] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:26:09.080785  114004 out.go:177]   - MINIKUBE_LOCATION=17644
	I1127 11:26:09.080797  114004 notify.go:220] Checking for updates...
	I1127 11:26:09.082316  114004 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:26:09.083868  114004 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:26:09.085470  114004 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	I1127 11:26:09.087214  114004 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 11:26:09.088692  114004 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 11:26:09.090752  114004 config.go:182] Loaded profile config "functional-876444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 11:26:09.091420  114004 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:26:09.123047  114004 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 11:26:09.123184  114004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:26:09.191369  114004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-11-27 11:26:09.182721583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:26:09.191477  114004 docker.go:295] overlay module found
	I1127 11:26:09.193695  114004 out.go:177] * Using the docker driver based on existing profile
	I1127 11:26:09.195488  114004 start.go:298] selected driver: docker
	I1127 11:26:09.195507  114004 start.go:902] validating driver "docker" against &{Name:functional-876444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-876444 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:26:09.195585  114004 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 11:26:09.197813  114004 out.go:177] 
	W1127 11:26:09.199418  114004 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1127 11:26:09.200884  114004 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-876444 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)
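
Note: --dry-run exercises the full start-time validation path without creating or mutating any resources, so the RSRC_INSUFFICIENT_REQ_MEMORY exit above is the expected rejection of the deliberately tiny 250MiB request, not a regression. For reference, the passing invocation the test runs afterwards:

	out/minikube-linux-amd64 start -p functional-876444 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio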

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-876444 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-876444 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (189.673186ms)

-- stdout --
	* [functional-876444] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1127 11:26:08.900167  113825 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:26:08.900340  113825 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:26:08.900352  113825 out.go:309] Setting ErrFile to fd 2...
	I1127 11:26:08.900360  113825 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:26:08.900702  113825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
	I1127 11:26:08.901230  113825 out.go:303] Setting JSON to false
	I1127 11:26:08.902336  113825 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7722,"bootTime":1701076647,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:26:08.902424  113825 start.go:138] virtualization: kvm guest
	I1127 11:26:08.904845  113825 out.go:177] * [functional-876444] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1127 11:26:08.907046  113825 notify.go:220] Checking for updates...
	I1127 11:26:08.908697  113825 out.go:177]   - MINIKUBE_LOCATION=17644
	I1127 11:26:08.910182  113825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:26:08.911767  113825 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:26:08.913247  113825 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	I1127 11:26:08.914801  113825 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 11:26:08.916281  113825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 11:26:08.918365  113825 config.go:182] Loaded profile config "functional-876444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 11:26:08.919147  113825 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:26:08.946548  113825 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 11:26:08.946639  113825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:26:09.000876  113825 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-11-27 11:26:08.992411165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:26:09.000978  113825 docker.go:295] overlay module found
	I1127 11:26:09.002852  113825 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1127 11:26:09.004316  113825 start.go:298] selected driver: docker
	I1127 11:26:09.004336  113825 start.go:902] validating driver "docker" against &{Name:functional-876444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-876444 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:26:09.004446  113825 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 11:26:09.006681  113825 out.go:177] 
	W1127 11:26:09.008050  113825 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1127 11:26:09.009448  113825 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
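
Note: minikube picks its display language from the host's standard locale environment variables, which is why the same dry run came back in French here. A minimal reproduction sketch, assuming a French locale (fr_FR.UTF-8) is actually installed on the host:

	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-876444 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	# Expect the same RSRC_INSUFFICIENT_REQ_MEMORY failure, localized as in the stderr above.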

TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)
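
Note: the -f/--format flag above takes a Go template evaluated against minikube's status struct, so single fields can be extracted for scripting; -o json returns the whole struct. A sketch using only forms that appear in this test:

	out/minikube-linux-amd64 -p functional-876444 status -f '{{.Host}}'
	out/minikube-linux-amd64 -p functional-876444 status -o json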

TestFunctional/parallel/ServiceCmdConnect (12.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-876444 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-876444 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-cbd6l" [f86fda1e-1408-479c-b26d-5d0aa46d7207] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-cbd6l" [f86fda1e-1408-479c-b26d-5d0aa46d7207] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.009525557s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30097
functional_test.go:1674: http://192.168.49.2:30097: success! body:

Hostname: hello-node-connect-55497b8b78-cbd6l

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30097
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.61s)
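
Note: service <name> --url resolves to <node-ip>:<nodePort> for a NodePort service. A hedged sketch of deriving the endpoint from the log above by hand (NODEPORT is just an illustrative shell variable):

	NODEPORT=$(kubectl --context functional-876444 get svc hello-node-connect -o jsonpath='{.spec.ports[0].nodePort}')
	curl "http://$(out/minikube-linux-amd64 -p functional-876444 ip):${NODEPORT}/"   # 192.168.49.2:30097 in this run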

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (33.15s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [41f16700-8ed6-45f6-af46-ceccf3eecd74] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.036153421s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-876444 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-876444 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-876444 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-876444 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-876444 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [292fbfb3-5c6a-4a60-a1ff-9ab6f6e4ae2b] Pending
helpers_test.go:344: "sp-pod" [292fbfb3-5c6a-4a60-a1ff-9ab6f6e4ae2b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [292fbfb3-5c6a-4a60-a1ff-9ab6f6e4ae2b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.009909771s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-876444 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-876444 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-876444 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [921b0b79-a11b-4e58-93f0-463ceb208fa5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [921b0b79-a11b-4e58-93f0-463ceb208fa5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.030437897s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-876444 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.15s)
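
Note: sp-pod is deleted and recreated between the touch and the ls, so the surviving /tmp/mount/foo proves the data lives on the PersistentVolumeClaim rather than in the container filesystem. A hedged manual check of the same claim (standard Kubernetes fields, not part of the test):

	kubectl --context functional-876444 get pvc myclaim -o jsonpath='{.status.phase}'   # expect: Bound
	kubectl --context functional-876444 exec sp-pod -- ls /tmp/mount                    # expect: foo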

TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (1.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh -n functional-876444 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 cp functional-876444:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3191334596/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh -n functional-876444 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)

TestFunctional/parallel/MySQL (20.93s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-876444 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-lx7wn" [c8ac55bf-662f-420e-98c2-d4d11839ad09] Pending
helpers_test.go:344: "mysql-859648c796-lx7wn" [c8ac55bf-662f-420e-98c2-d4d11839ad09] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-lx7wn" [c8ac55bf-662f-420e-98c2-d4d11839ad09] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.010815326s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-876444 exec mysql-859648c796-lx7wn -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-876444 exec mysql-859648c796-lx7wn -- mysql -ppassword -e "show databases;": exit status 1 (143.857089ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-876444 exec mysql-859648c796-lx7wn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.93s)
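
Note: the first exec fails with ERROR 2002 because the pod reports Running before mysqld has finished bringing up its socket; the test tolerates this and retries. A sketch of an equivalent wait loop:

	until kubectl --context functional-876444 exec mysql-859648c796-lx7wn -- mysql -ppassword -e "show databases;"; do
		sleep 2   # retry until mysqld accepts connections
	done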

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/79153/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "sudo cat /etc/test/nested/copy/79153/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.9s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/79153.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "sudo cat /etc/ssl/certs/79153.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/79153.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "sudo cat /usr/share/ca-certificates/79153.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/791532.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "sudo cat /etc/ssl/certs/791532.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/791532.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "sudo cat /usr/share/ca-certificates/791532.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.90s)
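
Note: 51391683.0 and 3ec20f2e.0 are OpenSSL subject-hash filenames, the form the system trust store uses for certificate lookup, which is why the test checks them alongside the .pem copies. A sketch of recomputing one hash, assuming openssl is available inside the node:

	out/minikube-linux-amd64 -p functional-876444 ssh "openssl x509 -in /usr/share/ca-certificates/79153.pem -noout -subject_hash"   # expect: 51391683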

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-876444 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
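
Note: the go-template above walks the label map of the first node. An equivalent jsonpath form, shown only as a sketch:

	kubectl --context functional-876444 get nodes -o jsonpath='{.items[0].metadata.labels}'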

TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-876444 ssh "sudo systemctl is-active docker": exit status 1 (270.878103ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-876444 ssh "sudo systemctl is-active containerd": exit status 1 (278.512018ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
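
Note: exit status 3 is the expected outcome here, not an error: systemctl is-active exits non-zero for any unit that is not active, and minikube ssh propagates the remote exit code. Since this cluster runs CRI-O, a sketch of the contrast (assuming the crio unit name):

	out/minikube-linux-amd64 -p functional-876444 ssh "sudo systemctl is-active crio"     # expect: active, exit 0
	out/minikube-linux-amd64 -p functional-876444 ssh "sudo systemctl is-active docker"   # expect: inactive, exit 3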

TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-876444 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-876444 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-876444 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 111065: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-876444 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-876444 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-876444 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a6e83094-87ac-4707-901e-8847fcfe4029] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a6e83094-87ac-4707-901e-8847fcfe4029] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.054076257s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.37s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-876444 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-876444 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-cl5s9" [469ffc6f-8758-4667-8389-272c2cb415b3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-cl5s9" [469ffc6f-8758-4667-8389-272c2cb415b3] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.008821026s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.15s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-876444 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.86.18 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.03s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-876444 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
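
Note: the serial steps above are the usual tunnel workflow: minikube tunnel installs a route to the cluster's service network so LoadBalancer services get a reachable ingress IP, and deleting the tunnel removes that route. A hedged recap using the values from this run:

	out/minikube-linux-amd64 -p functional-876444 tunnel --alsologtostderr &
	kubectl --context functional-876444 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'   # 10.101.86.18 in this run
	curl http://10.101.86.18
	kill %1   # stopping the tunnel tears the route back down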

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1127 11:26:06.690151   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "303.307269ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "66.740208ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "283.990368ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "82.376683ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (6.95s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-876444 /tmp/TestFunctionalparallelMountCmdany-port1833984400/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701084367523948872" to /tmp/TestFunctionalparallelMountCmdany-port1833984400/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701084367523948872" to /tmp/TestFunctionalparallelMountCmdany-port1833984400/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701084367523948872" to /tmp/TestFunctionalparallelMountCmdany-port1833984400/001/test-1701084367523948872
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-876444 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (328.353074ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 27 11:26 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 27 11:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 27 11:26 test-1701084367523948872
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh cat /mount-9p/test-1701084367523948872
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-876444 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b66427b6-2cac-48ef-9ffb-3f073e1b317d] Pending
helpers_test.go:344: "busybox-mount" [b66427b6-2cac-48ef-9ffb-3f073e1b317d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b66427b6-2cac-48ef-9ffb-3f073e1b317d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b66427b6-2cac-48ef-9ffb-3f073e1b317d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.015668051s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-876444 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-876444 /tmp/TestFunctionalparallelMountCmdany-port1833984400/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.95s)
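
Note: the first findmnt probe is expected to fail once while the 9p server comes up; the test retries, then exercises the mount from both the host and a pod before unmounting. A sketch of the same flow (/tmp/demo is an arbitrary illustrative host path):

	out/minikube-linux-amd64 mount -p functional-876444 /tmp/demo:/mount-9p &
	out/minikube-linux-amd64 -p functional-876444 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-876444 ssh "sudo umount -f /mount-9p"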

TestFunctional/parallel/ServiceCmd/List (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 service list -o json
functional_test.go:1493: Took "539.854142ms" to run "out/minikube-linux-amd64 -p functional-876444 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30932
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30932
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.65s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.65s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-876444 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-876444
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-876444 image ls --format short --alsologtostderr:
I1127 11:26:35.455566  118783 out.go:296] Setting OutFile to fd 1 ...
I1127 11:26:35.455749  118783 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:26:35.455760  118783 out.go:309] Setting ErrFile to fd 2...
I1127 11:26:35.455768  118783 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:26:35.455999  118783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
I1127 11:26:35.456613  118783 config.go:182] Loaded profile config "functional-876444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 11:26:35.456750  118783 config.go:182] Loaded profile config "functional-876444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 11:26:35.457202  118783 cli_runner.go:164] Run: docker container inspect functional-876444 --format={{.State.Status}}
I1127 11:26:35.474413  118783 ssh_runner.go:195] Run: systemctl --version
I1127 11:26:35.474484  118783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-876444
I1127 11:26:35.491833  118783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/functional-876444/id_rsa Username:docker}
I1127 11:26:35.583934  118783 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
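
Note: with the CRI-O runtime, image ls is backed by crictl on the node, as the stderr above shows. The raw listing can be read directly:

	out/minikube-linux-amd64 -p functional-876444 ssh "sudo crictl images --output json"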

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-876444 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | bdba757bc9336 | 520MB  |
| docker.io/library/nginx                 | latest             | a6bd71f48f683 | 191MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | alpine             | b135667c98980 | 49.5MB |
| gcr.io/google-containers/addon-resizer  | functional-876444  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-876444 image ls --format table --alsologtostderr:
I1127 11:26:35.706940  118948 out.go:296] Setting OutFile to fd 1 ...
I1127 11:26:35.707111  118948 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:26:35.707116  118948 out.go:309] Setting ErrFile to fd 2...
I1127 11:26:35.707121  118948 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:26:35.707309  118948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
I1127 11:26:35.708069  118948 config.go:182] Loaded profile config "functional-876444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 11:26:35.708215  118948 config.go:182] Loaded profile config "functional-876444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 11:26:35.708831  118948 cli_runner.go:164] Run: docker container inspect functional-876444 --format={{.State.Status}}
I1127 11:26:35.737911  118948 ssh_runner.go:195] Run: systemctl --version
I1127 11:26:35.737981  118948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-876444
I1127 11:26:35.755734  118948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/functional-876444/id_rsa Username:docker}
I1127 11:26:35.847720  118948 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
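
Note: to reproduce this listing by hand, the same subcommand can be invoked directly (a sketch; assumes the functional-876444 profile is still running and the freshly built binary is used, as in the test):

	out/minikube-linux-amd64 -p functional-876444 image ls --format table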

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-876444 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e","repoDigests":["docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d","docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77"],"repoTags":["docker.io/library/nginx:alpine"],"size":"49538855"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-876444"],"size":"34114467"},{"id":"ead0a4a53df89fd173874b46093b
6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":["docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3","docker.io/l
ibrary/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519653829"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a1
9e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18c
c","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller
-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"83f6cc407eed88d214aad97f35
39bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-876444 image ls --format json --alsologtostderr:
I1127 11:26:35.710877  118947 out.go:296] Setting OutFile to fd 1 ...
I1127 11:26:35.710997  118947 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:26:35.711007  118947 out.go:309] Setting ErrFile to fd 2...
I1127 11:26:35.711014  118947 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:26:35.711261  118947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
I1127 11:26:35.711986  118947 config.go:182] Loaded profile config "functional-876444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 11:26:35.712145  118947 config.go:182] Loaded profile config "functional-876444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 11:26:35.712718  118947 cli_runner.go:164] Run: docker container inspect functional-876444 --format={{.State.Status}}
I1127 11:26:35.730420  118947 ssh_runner.go:195] Run: systemctl --version
I1127 11:26:35.730474  118947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-876444
I1127 11:26:35.747953  118947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/functional-876444/id_rsa Username:docker}
I1127 11:26:35.840015  118947 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
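
Note: the JSON form above is a single array of image objects, so it composes with jq. A minimal sketch for listing only tagged images (assumes jq is installed; the filter is illustrative):

	out/minikube-linux-amd64 -p functional-876444 image ls --format json | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[]'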

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-876444 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests:
- docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3
- docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1
repoTags:
- docker.io/library/mysql:5.7
size: "519653829"
- id: b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e
repoDigests:
- docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d
- docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77
repoTags:
- docker.io/library/nginx:alpine
size: "49538855"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-876444
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-876444 image ls --format yaml --alsologtostderr:
I1127 11:26:35.460782  118785 out.go:296] Setting OutFile to fd 1 ...
I1127 11:26:35.461076  118785 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:26:35.461085  118785 out.go:309] Setting ErrFile to fd 2...
I1127 11:26:35.461090  118785 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:26:35.461312  118785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
I1127 11:26:35.462098  118785 config.go:182] Loaded profile config "functional-876444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 11:26:35.462246  118785 config.go:182] Loaded profile config "functional-876444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 11:26:35.462820  118785 cli_runner.go:164] Run: docker container inspect functional-876444 --format={{.State.Status}}
I1127 11:26:35.480593  118785 ssh_runner.go:195] Run: systemctl --version
I1127 11:26:35.480664  118785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-876444
I1127 11:26:35.499786  118785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/functional-876444/id_rsa Username:docker}
I1127 11:26:35.587751  118785 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-876444 ssh pgrep buildkitd: exit status 1 (276.173582ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image build -t localhost/my-image:functional-876444 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-876444 image build -t localhost/my-image:functional-876444 testdata/build --alsologtostderr: (1.822741461s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-876444 image build -t localhost/my-image:functional-876444 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f119d5bb628
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-876444
--> 286bda6d4c0
Successfully tagged localhost/my-image:functional-876444
286bda6d4c03b711f7202e628d4d109c223fee790647504e0a963595fba8f6de
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-876444 image build -t localhost/my-image:functional-876444 testdata/build --alsologtostderr:
I1127 11:26:35.730460  118970 out.go:296] Setting OutFile to fd 1 ...
I1127 11:26:35.730811  118970 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:26:35.730856  118970 out.go:309] Setting ErrFile to fd 2...
I1127 11:26:35.730875  118970 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:26:35.731166  118970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
I1127 11:26:35.732043  118970 config.go:182] Loaded profile config "functional-876444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 11:26:35.732758  118970 config.go:182] Loaded profile config "functional-876444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 11:26:35.733374  118970 cli_runner.go:164] Run: docker container inspect functional-876444 --format={{.State.Status}}
I1127 11:26:35.751737  118970 ssh_runner.go:195] Run: systemctl --version
I1127 11:26:35.751807  118970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-876444
I1127 11:26:35.769891  118970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/functional-876444/id_rsa Username:docker}
I1127 11:26:35.856209  118970 build_images.go:151] Building image from path: /tmp/build.3612860188.tar
I1127 11:26:35.856293  118970 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1127 11:26:35.865723  118970 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3612860188.tar
I1127 11:26:35.869852  118970 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3612860188.tar: stat -c "%s %y" /var/lib/minikube/build/build.3612860188.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3612860188.tar': No such file or directory
I1127 11:26:35.869890  118970 ssh_runner.go:362] scp /tmp/build.3612860188.tar --> /var/lib/minikube/build/build.3612860188.tar (3072 bytes)
I1127 11:26:35.895473  118970 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3612860188
I1127 11:26:35.904076  118970 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3612860188 -xf /var/lib/minikube/build/build.3612860188.tar
I1127 11:26:35.912331  118970 crio.go:297] Building image: /var/lib/minikube/build/build.3612860188
I1127 11:26:35.912407  118970 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-876444 /var/lib/minikube/build/build.3612860188 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1127 11:26:37.463458  118970 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-876444 /var/lib/minikube/build/build.3612860188 --cgroup-manager=cgroupfs: (1.551022096s)
I1127 11:26:37.463512  118970 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3612860188
I1127 11:26:37.472319  118970 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3612860188.tar
I1127 11:26:37.480509  118970 build_images.go:207] Built localhost/my-image:functional-876444 from /tmp/build.3612860188.tar
I1127 11:26:37.480549  118970 build_images.go:123] succeeded building to: functional-876444
I1127 11:26:37.480557  118970 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.33s)
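
Note: from the STEP lines above, the testdata/build context evidently contains a three-step Dockerfile along these lines (reconstructed from the build log; content.txt is a fixture file in the same directory):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /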

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.219051671s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-876444
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image load --daemon gcr.io/google-containers/addon-resizer:functional-876444 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-876444 image load --daemon gcr.io/google-containers/addon-resizer:functional-876444 --alsologtostderr: (4.59272534s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.91s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.88s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-876444 /tmp/TestFunctionalparallelMountCmdspecific-port4278346082/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-876444 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (298.163185ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-876444 /tmp/TestFunctionalparallelMountCmdspecific-port4278346082/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-876444 ssh "sudo umount -f /mount-9p": exit status 1 (312.126355ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-876444 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-876444 /tmp/TestFunctionalparallelMountCmdspecific-port4278346082/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.88s)
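
Note: the sequence above reduces to a two-step manual check (a sketch; /tmp/src stands in for the test's temp directory, and the mount runs in the background):

	out/minikube-linux-amd64 mount -p functional-876444 /tmp/src:/mount-9p --port 46464 &
	out/minikube-linux-amd64 -p functional-876444 ssh "findmnt -T /mount-9p | grep 9p"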

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.98s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-876444 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2823504123/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-876444 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2823504123/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-876444 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2823504123/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-876444 ssh "findmnt -T" /mount1: exit status 1 (351.558037ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-876444 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-876444 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2823504123/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-876444 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2823504123/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-876444 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2823504123/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.98s)
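
Note: the --kill flag exercised here is also the manual escape hatch for stale mount daemons:

	out/minikube-linux-amd64 mount -p functional-876444 --kill=true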

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image load --daemon gcr.io/google-containers/addon-resizer:functional-876444 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-876444 image load --daemon gcr.io/google-containers/addon-resizer:functional-876444 --alsologtostderr: (3.522903018s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.78s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image save gcr.io/google-containers/addon-resizer:functional-876444 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-876444 image save gcr.io/google-containers/addon-resizer:functional-876444 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.112102872s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image rm gcr.io/google-containers/addon-resizer:functional-876444 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-876444 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.092574033s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-876444
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-876444 image save --daemon gcr.io/google-containers/addon-resizer:functional-876444 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-876444
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.79s)
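
Note: the three image tests above compose into a save/remove/reload round trip (a sketch using the same tag; the tar path is arbitrary):

	out/minikube-linux-amd64 -p functional-876444 image save gcr.io/google-containers/addon-resizer:functional-876444 /tmp/addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-876444 image rm gcr.io/google-containers/addon-resizer:functional-876444
	out/minikube-linux-amd64 -p functional-876444 image load /tmp/addon-resizer-save.tar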

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-876444
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-876444
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-876444
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (83.88s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-123827 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1127 11:27:28.610572   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-123827 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m23.883182042s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (83.88s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.28s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-123827 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-123827 addons enable ingress --alsologtostderr -v=5: (11.280236301s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.28s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-123827 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

                                                
                                    
TestJSONOutput/start/Command (66.55s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-828215 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1127 11:31:35.743133   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
E1127 11:32:16.703959   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-828215 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m6.553714484s)
--- PASS: TestJSONOutput/start/Command (66.55s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-828215 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-828215 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.79s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-828215 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-828215 --output=json --user=testUser: (5.789219866s)
--- PASS: TestJSONOutput/stop/Command (5.79s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-354603 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-354603 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.198464ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"17e9b96c-0682-4a88-8938-1c9ab6e26bc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-354603] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2de536a6-fb59-4ef3-b4ad-a7ebc0dbabf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17644"}}
	{"specversion":"1.0","id":"0748c2c3-f906-40ad-8c17-eecf8dcb0f8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f5b574d4-e476-4a44-9d6e-8ca9a78a4686","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig"}}
	{"specversion":"1.0","id":"c39a57aa-0c1e-4b7e-a305-7b8b192ad81e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube"}}
	{"specversion":"1.0","id":"913574e5-0b41-48cc-acb5-5eba612d6373","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1b94ecd9-14ac-453a-8bf1-04d72a0157fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c1e3e90e-fc76-4646-ae1d-5b0ef71c4f21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-354603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-354603
--- PASS: TestErrorJSONOutput (0.23s)
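
Note: each stdout line above is one CloudEvents JSON object, so the structured stream can be filtered with jq. A sketch for surfacing only error events (assumes jq is installed):

	out/minikube-linux-amd64 start -p json-output-error-354603 --memory=2200 --output=json --wait=true --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'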

                                                
                                    
TestKicCustomNetwork/create_custom_network (30.79s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-451141 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-451141 --network=: (28.698926012s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-451141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-451141
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-451141: (2.070200895s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.79s)
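
Note: the empty --network= value above lets minikube pick the network name itself; a named value, as in the following tests, creates or reuses that Docker network (a sketch; the network name is illustrative):

	out/minikube-linux-amd64 start -p docker-network-451141 --network=my-custom-net
	docker network ls --format {{.Name}}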

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.33s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-628035 --network=bridge
E1127 11:33:21.581329   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
E1127 11:33:21.586478   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
E1127 11:33:21.596800   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
E1127 11:33:21.617074   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
E1127 11:33:21.657412   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
E1127 11:33:21.737792   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
E1127 11:33:21.898225   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
E1127 11:33:22.218786   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
E1127 11:33:22.859876   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
E1127 11:33:24.140781   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
E1127 11:33:26.700982   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
E1127 11:33:31.821260   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-628035 --network=bridge: (21.403427281s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-628035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-628035
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-628035: (1.908529085s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.33s)

TestKicExistingNetwork (26.44s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-311553 --network=existing-network
E1127 11:33:38.626189   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
E1127 11:33:42.062084   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-311553 --network=existing-network: (24.757076063s)
helpers_test.go:175: Cleaning up "existing-network-311553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-311553
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-311553: (1.550407301s)
--- PASS: TestKicExistingNetwork (26.44s)

TestKicCustomSubnet (24.69s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-707984 --subnet=192.168.60.0/24
E1127 11:34:02.543145   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-707984 --subnet=192.168.60.0/24: (22.64025465s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-707984 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-707984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-707984
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-707984: (2.036310678s)
--- PASS: TestKicCustomSubnet (24.69s)
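
The subnet assertion above relies on the docker network inspect Go template {{(index .IPAM.Config 0).Subnet}}. A small Go sketch of the same check follows; the network name and expected CIDR are copied from this run, and everything else is illustrative rather than the test's actual code.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // Same inspect template the test runs above; network name taken from this run.
    out, err := exec.Command("docker", "network", "inspect", "custom-subnet-707984",
        "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
    if err != nil {
        panic(err)
    }
    if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
        fmt.Printf("unexpected subnet: %q\n", got)
    } else {
        fmt.Println("subnet matches 192.168.60.0/24")
    }
}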

TestKicStaticIP (25.22s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-540671 --static-ip=192.168.200.200
E1127 11:34:43.503824   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
E1127 11:34:44.766339   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-540671 --static-ip=192.168.200.200: (23.000601903s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-540671 ip
helpers_test.go:175: Cleaning up "static-ip-540671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-540671
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-540671: (2.076311149s)
--- PASS: TestKicStaticIP (25.22s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (45.51s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-205378 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-205378 --driver=docker  --container-runtime=crio: (20.717364069s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-207793 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-207793 --driver=docker  --container-runtime=crio: (20.032573195s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-205378
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-207793
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-207793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-207793
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-207793: (1.860586866s)
helpers_test.go:175: Cleaning up "first-205378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-205378
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-205378: (1.878159903s)
--- PASS: TestMinikubeProfile (45.51s)

TestMountStart/serial/StartWithMountFirst (8.27s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-283985 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-283985 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.274472912s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.27s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-283985 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (5.4s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-298161 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-298161 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.39773779s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.40s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-298161 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-283985 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-283985 --alsologtostderr -v=5: (1.609689951s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-298161 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-298161
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-298161: (1.213732839s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-298161
E1127 11:35:54.779705   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-298161: (5.996239984s)
--- PASS: TestMountStart/serial/RestartStopped (7.00s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-298161 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (112.61s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-780990 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1127 11:36:05.424976   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
E1127 11:36:22.466430   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-780990 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m52.165025184s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.61s)

TestMultiNode/serial/DeployApp2Nodes (4.62s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-780990 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-780990 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-780990 -- rollout status deployment/busybox: (2.796841362s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-780990 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-780990 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-780990 -- exec busybox-5bc68d56bd-fxkgq -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-780990 -- exec busybox-5bc68d56bd-wslrr -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-780990 -- exec busybox-5bc68d56bd-fxkgq -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-780990 -- exec busybox-5bc68d56bd-wslrr -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-780990 -- exec busybox-5bc68d56bd-fxkgq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-780990 -- exec busybox-5bc68d56bd-wslrr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.62s)
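
The DNS checks above first collect pod IPs and names with kubectl jsonpath queries, then run nslookup inside each busybox pod. A sketch of the name-collection step follows, using plain kubectl with --context rather than the minikube kubectl wrapper the test invokes; only the jsonpath query and profile name come from the log.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // Same query as above; jsonpath prints the names space-separated.
    out, err := exec.Command("kubectl", "--context", "multinode-780990",
        "get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
    if err != nil {
        panic(err)
    }
    for _, name := range strings.Fields(string(out)) {
        fmt.Println("pod:", name)
    }
}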

TestMultiNode/serial/AddNode (49.77s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-780990 -v 3 --alsologtostderr
E1127 11:38:21.582082   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
E1127 11:38:49.265401   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-780990 -v 3 --alsologtostderr: (49.161166619s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.77s)

TestMultiNode/serial/ProfileList (0.29s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

TestMultiNode/serial/CopyFile (9.73s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 cp testdata/cp-test.txt multinode-780990:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 cp multinode-780990:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile196168833/001/cp-test_multinode-780990.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 cp multinode-780990:/home/docker/cp-test.txt multinode-780990-m02:/home/docker/cp-test_multinode-780990_multinode-780990-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990-m02 "sudo cat /home/docker/cp-test_multinode-780990_multinode-780990-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 cp multinode-780990:/home/docker/cp-test.txt multinode-780990-m03:/home/docker/cp-test_multinode-780990_multinode-780990-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990-m03 "sudo cat /home/docker/cp-test_multinode-780990_multinode-780990-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 cp testdata/cp-test.txt multinode-780990-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 cp multinode-780990-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile196168833/001/cp-test_multinode-780990-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 cp multinode-780990-m02:/home/docker/cp-test.txt multinode-780990:/home/docker/cp-test_multinode-780990-m02_multinode-780990.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990 "sudo cat /home/docker/cp-test_multinode-780990-m02_multinode-780990.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 cp multinode-780990-m02:/home/docker/cp-test.txt multinode-780990-m03:/home/docker/cp-test_multinode-780990-m02_multinode-780990-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990-m03 "sudo cat /home/docker/cp-test_multinode-780990-m02_multinode-780990-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 cp testdata/cp-test.txt multinode-780990-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 cp multinode-780990-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile196168833/001/cp-test_multinode-780990-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 cp multinode-780990-m03:/home/docker/cp-test.txt multinode-780990:/home/docker/cp-test_multinode-780990-m03_multinode-780990.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990 "sudo cat /home/docker/cp-test_multinode-780990-m03_multinode-780990.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 cp multinode-780990-m03:/home/docker/cp-test.txt multinode-780990-m02:/home/docker/cp-test_multinode-780990-m03_multinode-780990-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 ssh -n multinode-780990-m02 "sudo cat /home/docker/cp-test_multinode-780990-m03_multinode-780990-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.73s)
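
The copy sequence above is a full matrix: testdata into each node, each node back out to a local file, and each node into every other node, with a sudo cat verification after each step. A Go sketch that enumerates the same pairs follows (an illustrative loop with simplified /tmp paths, not the test's actual code):

package main

import "fmt"

func main() {
    nodes := []string{"multinode-780990", "multinode-780990-m02", "multinode-780990-m03"}
    for _, src := range nodes {
        // testdata -> node, then node -> local file.
        fmt.Printf("cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", src)
        fmt.Printf("cp %s:/home/docker/cp-test.txt /tmp/cp-test_%s.txt\n", src, src)
        // node -> every other node.
        for _, dst := range nodes {
            if dst == src {
                continue
            }
            fmt.Printf("cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
                src, dst, src, dst)
        }
    }
}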

TestMultiNode/serial/StopNode (2.19s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-780990 node stop m03: (1.233546871s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-780990 status: exit status 7 (477.549184ms)

-- stdout --
	multinode-780990
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-780990-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-780990-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-780990 status --alsologtostderr: exit status 7 (480.783725ms)

-- stdout --
	multinode-780990
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-780990-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-780990-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1127 11:39:04.723052  178536 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:39:04.723357  178536 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:39:04.723369  178536 out.go:309] Setting ErrFile to fd 2...
	I1127 11:39:04.723374  178536 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:39:04.723602  178536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
	I1127 11:39:04.723818  178536 out.go:303] Setting JSON to false
	I1127 11:39:04.723851  178536 mustload.go:65] Loading cluster: multinode-780990
	I1127 11:39:04.723980  178536 notify.go:220] Checking for updates...
	I1127 11:39:04.724410  178536 config.go:182] Loaded profile config "multinode-780990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 11:39:04.724435  178536 status.go:255] checking status of multinode-780990 ...
	I1127 11:39:04.725040  178536 cli_runner.go:164] Run: docker container inspect multinode-780990 --format={{.State.Status}}
	I1127 11:39:04.743605  178536 status.go:330] multinode-780990 host status = "Running" (err=<nil>)
	I1127 11:39:04.743640  178536 host.go:66] Checking if "multinode-780990" exists ...
	I1127 11:39:04.744024  178536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-780990
	I1127 11:39:04.761447  178536 host.go:66] Checking if "multinode-780990" exists ...
	I1127 11:39:04.761781  178536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 11:39:04.761832  178536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990
	I1127 11:39:04.779053  178536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990/id_rsa Username:docker}
	I1127 11:39:04.869068  178536 ssh_runner.go:195] Run: systemctl --version
	I1127 11:39:04.873126  178536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:39:04.883748  178536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:39:04.939915  178536 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:56 SystemTime:2023-11-27 11:39:04.930606737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:39:04.940600  178536 kubeconfig.go:92] found "multinode-780990" server: "https://192.168.58.2:8443"
	I1127 11:39:04.940631  178536 api_server.go:166] Checking apiserver status ...
	I1127 11:39:04.940669  178536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:39:04.951322  178536 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1416/cgroup
	I1127 11:39:04.960388  178536 api_server.go:182] apiserver freezer: "8:freezer:/docker/b91bdbce677fe6f82a9f829d9de3e87c315a78c68ff007e9e6f8a0c391b8497f/crio/crio-fdcc38c739571b7ae643c95e1e652c13422255a85317650d806ff18a023a80db"
	I1127 11:39:04.960446  178536 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b91bdbce677fe6f82a9f829d9de3e87c315a78c68ff007e9e6f8a0c391b8497f/crio/crio-fdcc38c739571b7ae643c95e1e652c13422255a85317650d806ff18a023a80db/freezer.state
	I1127 11:39:04.968863  178536 api_server.go:204] freezer state: "THAWED"
	I1127 11:39:04.968894  178536 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1127 11:39:04.973245  178536 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1127 11:39:04.973271  178536 status.go:421] multinode-780990 apiserver status = Running (err=<nil>)
	I1127 11:39:04.973299  178536 status.go:257] multinode-780990 status: &{Name:multinode-780990 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1127 11:39:04.973325  178536 status.go:255] checking status of multinode-780990-m02 ...
	I1127 11:39:04.973586  178536 cli_runner.go:164] Run: docker container inspect multinode-780990-m02 --format={{.State.Status}}
	I1127 11:39:04.990656  178536 status.go:330] multinode-780990-m02 host status = "Running" (err=<nil>)
	I1127 11:39:04.990725  178536 host.go:66] Checking if "multinode-780990-m02" exists ...
	I1127 11:39:04.991026  178536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-780990-m02
	I1127 11:39:05.008187  178536 host.go:66] Checking if "multinode-780990-m02" exists ...
	I1127 11:39:05.008517  178536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 11:39:05.008562  178536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-780990-m02
	I1127 11:39:05.025409  178536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17644-72381/.minikube/machines/multinode-780990-m02/id_rsa Username:docker}
	I1127 11:39:05.112797  178536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:39:05.123462  178536 status.go:257] multinode-780990-m02 status: &{Name:multinode-780990-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1127 11:39:05.123503  178536 status.go:255] checking status of multinode-780990-m03 ...
	I1127 11:39:05.123818  178536 cli_runner.go:164] Run: docker container inspect multinode-780990-m03 --format={{.State.Status}}
	I1127 11:39:05.141011  178536 status.go:330] multinode-780990-m03 host status = "Stopped" (err=<nil>)
	I1127 11:39:05.141037  178536 status.go:343] host is not running, skipping remaining checks
	I1127 11:39:05.141045  178536 status.go:257] multinode-780990-m03 status: &{Name:multinode-780990-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.19s)
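
Worth noting: with m03 stopped, minikube status exits 7 rather than 0, so the test treats a non-zero exit as data rather than failure. A sketch of reading the exit code in Go follows (binary path and profile name copied from this run; the handling pattern is illustrative):

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-780990", "status")
    // Output still returns captured stdout alongside an *exec.ExitError on
    // non-zero exit, so the per-node status text remains available.
    out, err := cmd.Output()
    if exitErr, ok := err.(*exec.ExitError); ok {
        fmt.Printf("status exited %d (expected when a node is stopped)\n", exitErr.ExitCode())
    } else if err != nil {
        panic(err)
    }
    fmt.Print(string(out))
}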

TestMultiNode/serial/StartAfterStop (11.07s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-780990 node start m03 --alsologtostderr: (10.380186087s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.07s)

TestMultiNode/serial/RestartKeepsNodes (109.61s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-780990
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-780990
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-780990: (24.823934151s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-780990 --wait=true -v=8 --alsologtostderr
E1127 11:39:44.766543   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
E1127 11:40:54.780076   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-780990 --wait=true -v=8 --alsologtostderr: (1m24.653029281s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-780990
--- PASS: TestMultiNode/serial/RestartKeepsNodes (109.61s)

TestMultiNode/serial/DeleteNode (4.74s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 node delete m03
E1127 11:41:07.811844   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-780990 node delete m03: (4.135859896s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.74s)

TestMultiNode/serial/StopMultiNode (23.91s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-780990 stop: (23.716341091s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-780990 status: exit status 7 (97.940536ms)

-- stdout --
	multinode-780990
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-780990-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-780990 status --alsologtostderr: exit status 7 (99.069414ms)

-- stdout --
	multinode-780990
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-780990-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1127 11:41:34.436614  188572 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:41:34.436917  188572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:41:34.436929  188572 out.go:309] Setting ErrFile to fd 2...
	I1127 11:41:34.436934  188572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:41:34.437102  188572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
	I1127 11:41:34.437269  188572 out.go:303] Setting JSON to false
	I1127 11:41:34.437306  188572 mustload.go:65] Loading cluster: multinode-780990
	I1127 11:41:34.437408  188572 notify.go:220] Checking for updates...
	I1127 11:41:34.437899  188572 config.go:182] Loaded profile config "multinode-780990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 11:41:34.437922  188572 status.go:255] checking status of multinode-780990 ...
	I1127 11:41:34.438499  188572 cli_runner.go:164] Run: docker container inspect multinode-780990 --format={{.State.Status}}
	I1127 11:41:34.456734  188572 status.go:330] multinode-780990 host status = "Stopped" (err=<nil>)
	I1127 11:41:34.456763  188572 status.go:343] host is not running, skipping remaining checks
	I1127 11:41:34.456772  188572 status.go:257] multinode-780990 status: &{Name:multinode-780990 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1127 11:41:34.456802  188572 status.go:255] checking status of multinode-780990-m02 ...
	I1127 11:41:34.457109  188572 cli_runner.go:164] Run: docker container inspect multinode-780990-m02 --format={{.State.Status}}
	I1127 11:41:34.473157  188572 status.go:330] multinode-780990-m02 host status = "Stopped" (err=<nil>)
	I1127 11:41:34.473177  188572 status.go:343] host is not running, skipping remaining checks
	I1127 11:41:34.473183  188572 status.go:257] multinode-780990-m02 status: &{Name:multinode-780990-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.91s)

TestMultiNode/serial/RestartMultiNode (76.58s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-780990 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-780990 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m15.994776583s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-780990 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (76.58s)

TestMultiNode/serial/ValidateNameConflict (26.51s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-780990
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-780990-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-780990-m02 --driver=docker  --container-runtime=crio: exit status 14 (83.060507ms)

-- stdout --
	* [multinode-780990-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-780990-m02' is duplicated with machine name 'multinode-780990-m02' in profile 'multinode-780990'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-780990-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-780990-m03 --driver=docker  --container-runtime=crio: (24.214888807s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-780990
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-780990: exit status 80 (277.180625ms)

-- stdout --
	* Adding node m03 to cluster multinode-780990
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-780990-m03 already exists in multinode-780990-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-780990-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-780990-m03: (1.879636891s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.51s)

TestPreload (122.1s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-878430 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-878430 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m8.534443565s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-878430 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-878430 image pull gcr.io/k8s-minikube/busybox: (1.654952121s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-878430
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-878430: (5.70509556s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-878430 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1127 11:44:44.765880   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-878430 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (43.688131359s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-878430 image list
helpers_test.go:175: Cleaning up "test-preload-878430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-878430
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-878430: (2.282176415s)
--- PASS: TestPreload (122.10s)

TestScheduledStopUnix (100.78s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-293590 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-293590 --memory=2048 --driver=docker  --container-runtime=crio: (24.452256306s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-293590 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-293590 -n scheduled-stop-293590
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-293590 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-293590 --cancel-scheduled
E1127 11:45:54.779323   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-293590 -n scheduled-stop-293590
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-293590
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-293590 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-293590
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-293590: exit status 7 (80.275087ms)

-- stdout --
	scheduled-stop-293590
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-293590 -n scheduled-stop-293590
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-293590 -n scheduled-stop-293590: exit status 7 (79.582646ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-293590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-293590
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-293590: (4.876606105s)
--- PASS: TestScheduledStopUnix (100.78s)
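
The sequence above schedules a stop, cancels it, re-schedules with a 15s delay, and finally confirms the host reached Stopped via --format={{.Host}}. A Go sketch of polling for that final state follows; the interval and timeout are arbitrary choices, and only the status command and profile name come from this run.

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

func main() {
    deadline := time.Now().Add(2 * time.Minute)
    for time.Now().Before(deadline) {
        // Exit status 7 once the host is stopped (as seen above), so the
        // error is ignored and only the printed Host field is checked.
        out, _ := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{.Host}}", "-p", "scheduled-stop-293590").Output()
        if strings.TrimSpace(string(out)) == "Stopped" {
            fmt.Println("host stopped")
            return
        }
        time.Sleep(5 * time.Second)
    }
    fmt.Println("timed out waiting for scheduled stop")
}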

TestInsufficientStorage (13.19s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-436502 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-436502 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.828620397s)

-- stdout --
	{"specversion":"1.0","id":"2a1ab6c5-c6e6-4eb8-af79-c68c09950981","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-436502] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a61fc25e-2b96-49fb-8ea1-489423eaab31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17644"}}
	{"specversion":"1.0","id":"5eda5b6b-470e-46ef-8a70-8242dfeebd9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b1228611-a987-441e-895d-f168b3222027","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig"}}
	{"specversion":"1.0","id":"69fe4e67-7566-4273-8726-3be33d17e860","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube"}}
	{"specversion":"1.0","id":"7ca23fec-587f-4574-bc8f-51d7ec063404","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4653374d-7a78-4631-ad9f-c146def042a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a3cf50fe-0847-4717-9ece-8c89b8c821a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"be9163c1-6770-4ae2-958d-297e3bad7895","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"06c99867-647b-414e-bdca-1e27ff1cbe57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d4d0e17-4a24-443f-b37f-c20e6d998bf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b8826867-d3b1-421d-91d0-7602ffcffde9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-436502 in cluster insufficient-storage-436502","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fa4df9c4-b34e-425b-b97a-544cd867e9ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b1e9d9f6-25cb-4294-807e-6b696621be15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7eba84bf-86e3-42f2-aebb-fee13c2d3cd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-436502 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-436502 --output=json --layout=cluster: exit status 7 (268.579067ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-436502","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-436502","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1127 11:47:17.142215  210243 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-436502" does not appear in /home/jenkins/minikube-integration/17644-72381/kubeconfig

                                                
                                                
** /stderr **
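
The --layout=cluster JSON above is meant for machine consumption. A sketch of extracting the overall state, assuming jq is installed on the host:

	out/minikube-linux-amd64 status -p insufficient-storage-436502 --output=json --layout=cluster | jq -r '.StatusName'
	# prints "InsufficientStorage" (StatusCode 507) for the run above; minikube itself exits 7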
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-436502 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-436502 --output=json --layout=cluster: exit status 7 (262.208748ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-436502","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-436502","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1127 11:47:17.405275  210331 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-436502" does not appear in /home/jenkins/minikube-integration/17644-72381/kubeconfig
	E1127 11:47:17.414378  210331 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/insufficient-storage-436502/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-436502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-436502
E1127 11:47:17.827460   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-436502: (1.833883459s)
--- PASS: TestInsufficientStorage (13.19s)

                                                
                                    
x
+
TestKubernetesUpgrade (365.89s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-052444 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-052444 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (59.573248934s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-052444
E1127 11:48:21.582041   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-052444: (5.778033909s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-052444 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-052444 status --format={{.Host}}: exit status 7 (86.141022ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-052444 --memory=2200 --kubernetes-version=v1.28.4 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-052444 --memory=2200 --kubernetes-version=v1.28.4 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m34.652261858s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-052444 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-052444 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-052444 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (82.313454ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-052444] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.4 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-052444
	    minikube start -p kubernetes-upgrade-052444 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0524442 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.4, by running:
	    
	    minikube start -p kubernetes-upgrade-052444 --kubernetes-version=v1.28.4
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-052444 --memory=2200 --kubernetes-version=v1.28.4 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-052444 --memory=2200 --kubernetes-version=v1.28.4 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.393451361s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-052444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-052444
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-052444: (2.269299967s)
--- PASS: TestKubernetesUpgrade (365.89s)
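
Stripped of test scaffolding, the upgrade path this test drives is three user-facing commands (profile name and versions taken from the run above; a sketch, not the test's literal invocation):

	minikube start -p kubernetes-upgrade-052444 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-052444
	minikube start -p kubernetes-upgrade-052444 --kubernetes-version=v1.28.4 --driver=docker --container-runtime=crio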

                                                
                                    
x
+
TestMissingContainerUpgrade (171.18s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.2335976840.exe start -p missing-upgrade-877590 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.2335976840.exe start -p missing-upgrade-877590 --memory=2200 --driver=docker  --container-runtime=crio: (1m42.981681938s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-877590
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-877590: (1.095061055s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-877590
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-877590 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-877590 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m4.175358189s)
helpers_test.go:175: Cleaning up "missing-upgrade-877590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-877590
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-877590: (2.373851367s)
--- PASS: TestMissingContainerUpgrade (171.18s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.46s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-148287
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.66s)

                                                
                                    
x
+
TestPause/serial/Start (43.65s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-667171 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1127 11:49:44.625937   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
E1127 11:49:44.766533   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-667171 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (43.645952348s)
--- PASS: TestPause/serial/Start (43.65s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (44s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-667171 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-667171 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.981324889s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (44.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-775004 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-775004 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (80.069853ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-775004] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
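
The MK_USAGE failure above is the guard this test expects: --kubernetes-version and --no-kubernetes are mutually exclusive. Following the error's own suggestion, a valid retry would look like:

	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-775004 --no-kubernetes --driver=docker --container-runtime=crio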

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (24.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-775004 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-775004 --driver=docker  --container-runtime=crio: (23.720734765s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-775004 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (24.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-701296 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-701296 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (174.427733ms)

                                                
                                                
-- stdout --
	* [false-701296] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1127 11:50:14.285916  250638 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:50:14.286100  250638 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:50:14.286114  250638 out.go:309] Setting ErrFile to fd 2...
	I1127 11:50:14.286123  250638 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:50:14.286337  250638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-72381/.minikube/bin
	I1127 11:50:14.286954  250638 out.go:303] Setting JSON to false
	I1127 11:50:14.288464  250638 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9167,"bootTime":1701076647,"procs":515,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:50:14.288543  250638 start.go:138] virtualization: kvm guest
	I1127 11:50:14.291086  250638 out.go:177] * [false-701296] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:50:14.292828  250638 notify.go:220] Checking for updates...
	I1127 11:50:14.294513  250638 out.go:177]   - MINIKUBE_LOCATION=17644
	I1127 11:50:14.296114  250638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:50:14.297558  250638 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17644-72381/kubeconfig
	I1127 11:50:14.298998  250638 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-72381/.minikube
	I1127 11:50:14.300339  250638 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 11:50:14.301711  250638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 11:50:14.303883  250638 config.go:182] Loaded profile config "NoKubernetes-775004": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 11:50:14.304062  250638 config.go:182] Loaded profile config "kubernetes-upgrade-052444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 11:50:14.304259  250638 config.go:182] Loaded profile config "pause-667171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 11:50:14.304369  250638 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:50:14.328678  250638 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 11:50:14.328772  250638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 11:50:14.384622  250638 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:66 SystemTime:2023-11-27 11:50:14.374094111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 11:50:14.384733  250638 docker.go:295] overlay module found
	I1127 11:50:14.386520  250638 out.go:177] * Using the docker driver based on user configuration
	I1127 11:50:14.387921  250638 start.go:298] selected driver: docker
	I1127 11:50:14.387937  250638 start.go:902] validating driver "docker" against <nil>
	I1127 11:50:14.387949  250638 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 11:50:14.390344  250638 out.go:177] 
	W1127 11:50:14.391801  250638 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1127 11:50:14.393299  250638 out.go:177] 

                                                
                                                
** /stderr **
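
The exit is the point of the test: crio ships no built-in pod networking, so --cni=false cannot pass validation. A hedged sketch of a start line that would clear the same check (bridge is one of minikube's stock --cni values; kindnet, calico and others would also work):

	minikube start -p false-701296 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio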
net_test.go:88: 
----------------------- debugLogs start: false-701296 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-701296

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-701296

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-701296

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-701296

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-701296

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-701296

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-701296

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-701296

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-701296

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-701296

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-701296

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-701296" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-701296" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:50:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-775004
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:48:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-052444
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:50:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-667171
contexts:
- context:
    cluster: NoKubernetes-775004
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:50:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: NoKubernetes-775004
  name: NoKubernetes-775004
- context:
    cluster: kubernetes-upgrade-052444
    user: kubernetes-upgrade-052444
  name: kubernetes-upgrade-052444
- context:
    cluster: pause-667171
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:50:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-667171
  name: pause-667171
current-context: NoKubernetes-775004
kind: Config
preferences: {}
users:
- name: NoKubernetes-775004
  user:
    client-certificate: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/NoKubernetes-775004/client.crt
    client-key: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/NoKubernetes-775004/client.key
- name: kubernetes-upgrade-052444
  user:
    client-certificate: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/kubernetes-upgrade-052444/client.crt
    client-key: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/kubernetes-upgrade-052444/client.key
- name: pause-667171
  user:
    client-certificate: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/pause-667171/client.crt
    client-key: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/pause-667171/client.key
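
With three live profiles sharing this kubeconfig, switching between them is kubectl's standard context machinery (context names as dumped above):

	kubectl config get-contexts
	kubectl config use-context pause-667171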

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-701296

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701296"

                                                
                                                
----------------------- debugLogs end: false-701296 [took: 3.838774947s] --------------------------------
helpers_test.go:175: Cleaning up "false-701296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-701296
--- PASS: TestNetworkPlugins/group/false (4.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (26.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-775004 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-775004 --no-kubernetes --driver=docker  --container-runtime=crio: (24.437212832s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-775004 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-775004 status -o json: exit status 2 (303.520757ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-775004","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
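
That JSON captures the --no-kubernetes state in two fields: the node container is up, Kubernetes is not. A one-line check, assuming jq:

	out/minikube-linux-amd64 -p NoKubernetes-775004 status -o json | jq -r '"\(.Host)/\(.Kubelet)"'
	# -> Running/Stopped (the non-zero minikube exit seen above is expected in this state)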
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-775004
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-775004: (1.983304089s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (26.72s)

                                                
                                    
x
+
TestPause/serial/Pause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-667171 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-667171 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-667171 --output=json --layout=cluster: exit status 2 (312.154779ms)

                                                
                                                
-- stdout --
	{"Name":"pause-667171","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-667171","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
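
StatusCode 418 is minikube's marker for a paused cluster (apiserver paused, kubelet stopped), which is why the command exits 2 yet the test passes. A scripted check could key off it, assuming jq:

	out/minikube-linux-amd64 status -p pause-667171 --output=json --layout=cluster | jq -e '.StatusCode == 418' >/dev/null && echo paused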

                                                
                                    
x
+
TestPause/serial/Unpause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-667171 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.81s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-667171 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.81s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.71s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-667171 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-667171 --alsologtostderr -v=5: (2.711050923s)
--- PASS: TestPause/serial/DeletePaused (2.71s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (28.54s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (28.481784173s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-667171
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-667171: exit status 1 (16.03893ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-667171: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (28.54s)
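
The three docker commands above amount to a leak check. A compact sketch over the same profile name, where empty output from every filter means the delete was clean:

	docker ps -a --filter name=pause-667171 -q      # no leftover container
	docker volume ls --filter name=pause-667171 -q  # no leftover volume
	docker network ls --filter name=pause-667171 -q # no leftover network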

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-775004 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-775004 --no-kubernetes --driver=docker  --container-runtime=crio: (7.487085764s)
--- PASS: TestNoKubernetes/serial/Start (7.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-775004 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-775004 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.113696ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (14.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E1127 11:50:54.779289   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.291368263s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (14.80s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-775004
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-775004: (1.231971924s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-775004 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-775004 --driver=docker  --container-runtime=crio: (6.726949183s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.73s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-775004 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-775004 "sudo systemctl is-active --quiet service kubelet": exit status 1 (323.93069ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (120.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-392229 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-392229 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m0.657873534s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (120.66s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (54.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-456190 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-456190 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (54.259914669s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-456190 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4e485554-da71-44d4-b2a3-30589ddafa88] Pending
helpers_test.go:344: "busybox" [4e485554-da71-44d4-b2a3-30589ddafa88] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4e485554-da71-44d4-b2a3-30589ddafa88] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.015759885s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-456190 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.39s)
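testdata/busybox.yaml itself is not reproduced in this report. A minimal equivalent of what this step deploys and then checks, sketched from the log (only the pod name, the integration-test=busybox label, and the image appear in this run; the rest is illustrative):

kubectl --context no-preload-456190 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox   # the label the test waits on
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]  # illustrative; keeps the pod Running
EOF
kubectl --context no-preload-456190 exec busybox -- /bin/sh -c "ulimit -n"   # the post-deploy check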

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-456190 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-456190 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-456190 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-456190 --alsologtostderr -v=3: (12.009705025s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456190 -n no-preload-456190
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456190 -n no-preload-456190: exit status 7 (141.449436ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-456190 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)
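Two behaviors worth noting for anyone scripting against these profiles: minikube status signals component state through its exit code as well as its output (the exit status 7 above corresponds to the Stopped host, which the test tolerates), and addons can be enabled while the cluster is down, taking effect on the next start. A sketch of the same sequence:

minikube status --format='{{.Host}}' -p no-preload-456190   # prints "Stopped", exits 7
minikube addons enable dashboard -p no-preload-456190 \
    --images=MetricsScraper=registry.k8s.io/echoserver:1.4   # accepted while stopped
minikube start -p no-preload-456190 --container-runtime=crio # the addon comes up with the cluster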

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (333.24s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-456190 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1127 11:53:21.581847   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-456190 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m32.943764983s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456190 -n no-preload-456190
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (333.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-392229 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a03efbbd-2f4f-4abc-a48d-d89389812801] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a03efbbd-2f4f-4abc-a48d-d89389812801] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.015261208s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-392229 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.53s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (66.82s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-914191 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-914191 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m6.821311275s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (66.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-392229 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-392229 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-392229 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-392229 --alsologtostderr -v=3: (11.994292836s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-392229 -n old-k8s-version-392229
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-392229 -n old-k8s-version-392229: exit status 7 (115.480736ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-392229 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (64.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-392229 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-392229 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m3.798979032s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-392229 -n old-k8s-version-392229
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (64.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-914191 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4a482a63-373e-438e-bf2e-549b0559ef66] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4a482a63-373e-438e-bf2e-549b0559ef66] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.016253508s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-914191 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-914191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-914191 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-914191 --alsologtostderr -v=3
E1127 11:54:44.766188   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-914191 --alsologtostderr -v=3: (11.95751887s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-4rk7s" [ac2b464e-e200-4bba-aab0-8441ebc23cd9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021426282s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-914191 -n embed-certs-914191
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-914191 -n embed-certs-914191: exit status 7 (83.632473ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-914191 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (336.2s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-914191 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-914191 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m35.683296913s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-914191 -n embed-certs-914191
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (336.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-623774 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-623774 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m9.691418023s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.69s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-4rk7s" [ac2b464e-e200-4bba-aab0-8441ebc23cd9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010472681s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-392229 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)
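Both AfterStop checks reduce to waiting on the dashboard pods by label and then inspecting the scraper deployment; a rough kubectl equivalent of the helper's poll loop (not the test's actual code):

kubectl --context old-k8s-version-392229 -n kubernetes-dashboard \
    wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
kubectl --context old-k8s-version-392229 -n kubernetes-dashboard \
    describe deploy/dashboard-metrics-scraper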

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-392229 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)
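The image audit above amounts to listing everything CRI-O has pulled and flagging images outside the expected minikube set; the same check from a shell might look like this (the jq/grep filter is illustrative, not the test's own allowlist):

minikube ssh -p old-k8s-version-392229 "sudo crictl images -o json" \
    | jq -r '.images[].repoTags[]' \
    | grep -v -e '^registry.k8s.io/' -e '^gcr.io/k8s-minikube/'   # whatever prints is "non-minikube"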

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-392229 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-392229 --alsologtostderr -v=1: (1.03865212s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-392229 -n old-k8s-version-392229
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-392229 -n old-k8s-version-392229: exit status 2 (389.241383ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-392229 -n old-k8s-version-392229
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-392229 -n old-k8s-version-392229: exit status 2 (423.809807ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-392229 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-392229 -n old-k8s-version-392229
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-392229 -n old-k8s-version-392229
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.39s)
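The Pause sequence also leans on exit codes: while paused, the apiserver reports Paused and the kubelet Stopped, each via exit status 2, and only after unpause do the status probes succeed again. Condensed, with expected results as comments:

minikube pause -p old-k8s-version-392229
minikube status --format='{{.APIServer}}' -p old-k8s-version-392229   # "Paused", exits 2
minikube status --format='{{.Kubelet}}' -p old-k8s-version-392229     # "Stopped", exits 2
minikube unpause -p old-k8s-version-392229
minikube status --format='{{.APIServer}}' -p old-k8s-version-392229   # back to "Running", exits 0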

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (36.57s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-327431 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-327431 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (36.568735628s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.57s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-327431 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-327431 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-327431 --alsologtostderr -v=3: (1.210705793s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-327431 -n newest-cni-327431
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-327431 -n newest-cni-327431: exit status 7 (79.621421ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-327431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (25.95s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-327431 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1127 11:55:54.779841   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/functional-876444/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-327431 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (25.638691759s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-327431 -n newest-cni-327431
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-623774 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7ff39fe3-075b-4b71-857c-c59c25c34b10] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7ff39fe3-075b-4b71-857c-c59c25c34b10] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.016936223s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-623774 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.47s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-327431 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.53s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-327431 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-327431 -n newest-cni-327431
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-327431 -n newest-cni-327431: exit status 2 (295.338553ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-327431 -n newest-cni-327431
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-327431 -n newest-cni-327431: exit status 2 (296.206062ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-327431 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-327431 -n newest-cni-327431
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-327431 -n newest-cni-327431
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-623774 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-623774 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-623774 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-623774 --alsologtostderr -v=3: (11.971756426s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (71.42s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-701296 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-701296 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m11.422468768s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-623774 -n default-k8s-diff-port-623774
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-623774 -n default-k8s-diff-port-623774: exit status 7 (91.587786ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-623774 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-623774 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-623774 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m38.006588379s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-623774 -n default-k8s-diff-port-623774
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.31s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-701296 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-701296 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9f845" [ded8fc22-35b3-4ec9-99b8-c7390f15f4be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9f845" [ded8fc22-35b3-4ec9-99b8-c7390f15f4be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.008423547s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-701296 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-701296 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-701296 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
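Each NetworkPlugins group runs the same three probes inside the netcat deployment, differing only in target; HairPin is the interesting one, checking that a pod can reach itself back through its own service name. Side by side, exactly as this run issued them:

# DNS: cluster DNS resolves the in-cluster API service
kubectl --context auto-701296 exec deployment/netcat -- nslookup kubernetes.default
# Localhost: the pod reaches its own port directly
kubectl --context auto-701296 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: the same port, routed back through the "netcat" service
kubectl --context auto-701296 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"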

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (45.23s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-701296 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1127 11:58:21.581583   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/ingress-addon-legacy-123827/client.crt: no such file or directory
E1127 11:58:23.356457   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/old-k8s-version-392229/client.crt: no such file or directory
E1127 11:58:23.361706   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/old-k8s-version-392229/client.crt: no such file or directory
E1127 11:58:23.371966   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/old-k8s-version-392229/client.crt: no such file or directory
E1127 11:58:23.392227   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/old-k8s-version-392229/client.crt: no such file or directory
E1127 11:58:23.432461   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/old-k8s-version-392229/client.crt: no such file or directory
E1127 11:58:23.512679   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/old-k8s-version-392229/client.crt: no such file or directory
E1127 11:58:23.673622   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/old-k8s-version-392229/client.crt: no such file or directory
E1127 11:58:23.994205   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/old-k8s-version-392229/client.crt: no such file or directory
E1127 11:58:24.635108   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/old-k8s-version-392229/client.crt: no such file or directory
E1127 11:58:25.915766   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/old-k8s-version-392229/client.crt: no such file or directory
E1127 11:58:28.476865   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/old-k8s-version-392229/client.crt: no such file or directory
E1127 11:58:33.597943   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/old-k8s-version-392229/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-701296 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (45.229570082s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (45.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rkn5k" [209d3eb0-a55c-4313-9385-9cd09d79c7bf] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rkn5k" [209d3eb0-a55c-4313-9385-9cd09d79c7bf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.015931424s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nfjtm" [4dd2d384-55ca-4f96-a30d-dd8a259b6240] Running
E1127 11:58:43.838867   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/old-k8s-version-392229/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.019306689s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rkn5k" [209d3eb0-a55c-4313-9385-9cd09d79c7bf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009739948s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-456190 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-701296 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-701296 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-btm9p" [fb4fca71-51c8-4081-9050-642922406110] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-btm9p" [fb4fca71-51c8-4081-9050-642922406110] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.00814689s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-456190 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.68s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-456190 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-456190 -n no-preload-456190
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-456190 -n no-preload-456190: exit status 2 (310.973696ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-456190 -n no-preload-456190
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-456190 -n no-preload-456190: exit status 2 (303.735926ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-456190 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-456190 -n no-preload-456190
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-456190 -n no-preload-456190
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.68s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-701296 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-701296 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-701296 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (64.31s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-701296 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1127 11:59:04.319565   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/old-k8s-version-392229/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-701296 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m4.310648152s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (61.83s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-701296 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1127 11:59:44.766036   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/addons-112776/client.crt: no such file or directory
E1127 11:59:45.280298   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/old-k8s-version-392229/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-701296 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m1.831833212s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.83s)
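As the Start invocations in this group show, --cni accepts either a built-in plugin name or a path to an arbitrary CNI manifest; the custom-flannel profile exercises the latter. In short:

# built-in plugin, selected by name
minikube start -p kindnet-701296 --cni=kindnet --driver=docker --container-runtime=crio
# any CNI manifest on disk
minikube start -p custom-flannel-701296 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio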

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-88bgw" [df6c6820-de01-4b31-a86a-87371dc56a3b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.026321058s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-701296 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-701296 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bdw9k" [4e5515c3-4492-4502-8846-5f163e86b2d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bdw9k" [4e5515c3-4492-4502-8846-5f163e86b2d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.009329889s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.32s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-701296 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-701296 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-701296 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)
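The three short probes above isolate different failure modes: DNS checks cluster-DNS resolution from inside a pod, Localhost checks the pod's own listener without touching the CNI data path, and HairPin sends traffic out through the Service VIP and back to the same pod, which requires hairpin NAT. The nc flags used:

# -z: probe only, send no data; -w 5: 5-second timeout; -i 5: interval between probes
nc -w 5 -i 5 -z localhost 8080   # Localhost: the pod's own listener
nc -w 5 -i 5 -z netcat 8080      # HairPin: out via the 'netcat' Service and back to the same pod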

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-701296 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-701296 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ctpnk" [bff891a0-dbed-43ea-80f3-e143ac2b51eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ctpnk" [bff891a0-dbed-43ea-80f3-e143ac2b51eb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.008929549s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-701296 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-701296 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-701296 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (15.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7n8c7" [59da1a49-dd6b-4329-ab31-9497ea915165] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7n8c7" [59da1a49-dd6b-4329-ab31-9497ea915165] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.021260323s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (15.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (44.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-701296 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-701296 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (44.2138029s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (44.21s)
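--enable-default-cni=true is the legacy spelling for running with the default bridge CNI configuration; assuming minikube's deprecation notice for this flag still applies, the equivalent modern invocation would be:

# Assumed equivalent of the deprecated flag (per minikube's --enable-default-cni deprecation message):
minikube start -p enable-default-cni-701296 --cni=bridge --container-runtime=crio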

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7n8c7" [59da1a49-dd6b-4329-ab31-9497ea915165] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009789379s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-914191 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (61.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-701296 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-701296 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.841161583s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.84s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-914191 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)
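The audit above parses crictl's JSON listing and flags any repo tag that is not a stock minikube/Kubernetes image. The same listing can be inspected by hand; a sketch assuming jq is installed and crictl's usual {"images":[{"repoTags":[...]}]} schema:

minikube ssh -p embed-certs-914191 "sudo crictl images -o json" \
  | jq -r '.images[].repoTags[]' | sort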

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-914191 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-914191 -n embed-certs-914191
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-914191 -n embed-certs-914191: exit status 2 (339.807354ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-914191 -n embed-certs-914191
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-914191 -n embed-certs-914191: exit status 2 (331.442158ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-914191 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-914191 --alsologtostderr -v=1: (1.100872401s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-914191 -n embed-certs-914191
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-914191 -n embed-certs-914191
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.32s)
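The two exit-status-2 results above are expected rather than failures: minikube status reports a paused apiserver or stopped kubelet through its exit code, which is why the harness notes "may be ok". A condensed replay of the cycle (the exit-code and output comments are assumptions based on the run above):

minikube pause -p embed-certs-914191
minikube status -p embed-certs-914191 --format='{{.APIServer}}'   # "Paused", exit status 2
minikube status -p embed-certs-914191 --format='{{.Kubelet}}'     # "Stopped", exit status 2
minikube unpause -p embed-certs-914191
minikube status -p embed-certs-914191 --format='{{.APIServer}}'   # presumably "Running" again, exit status 0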

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (79.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-701296 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1127 12:01:07.201304   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/old-k8s-version-392229/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-701296 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m19.52214649s)
--- PASS: TestNetworkPlugins/group/bridge/Start (79.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-701296 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-701296 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4qscv" [bc004234-75d3-4bfe-96b3-3e0c6eec0e9f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4qscv" [bc004234-75d3-4bfe-96b3-3e0c6eec0e9f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.008098523s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-701296 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-701296 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-701296 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7pphm" [71213082-81ff-4674-a7a0-d9f613c3df45] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.015767233s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
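Note that the controller-pod selector is CNI-specific: calico's pods match k8s-app=calico-node in kube-system, while flannel's DaemonSet pods carry app=flannel in the dedicated kube-flannel namespace. To list them directly:

kubectl --context flannel-701296 -n kube-flannel get pods -l app=flannel -o wide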

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-701296 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-701296 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vfj8q" [be0b30eb-00c2-4ee5-ac3c-756a91a7f81e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vfj8q" [be0b30eb-00c2-4ee5-ac3c-756a91a7f81e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.009440974s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xpcl9" [3f14f911-aefb-4e41-8a38-30b4aa02fd17] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xpcl9" [3f14f911-aefb-4e41-8a38-30b4aa02fd17] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.017026597s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-701296 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-701296 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-701296 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xpcl9" [3f14f911-aefb-4e41-8a38-30b4aa02fd17] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009974233s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-623774 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-701296 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-701296 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b4cgg" [0e762030-5c97-47f7-8e8a-a3f14dc8bb2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-b4cgg" [0e762030-5c97-47f7-8e8a-a3f14dc8bb2e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.008090244s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-623774 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-623774 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-623774 -n default-k8s-diff-port-623774
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-623774 -n default-k8s-diff-port-623774: exit status 2 (315.768581ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-623774 -n default-k8s-diff-port-623774
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-623774 -n default-k8s-diff-port-623774: exit status 2 (309.734905ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-623774 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-623774 -n default-k8s-diff-port-623774
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-623774 -n default-k8s-diff-port-623774
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-701296 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-701296 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-701296 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (24/308)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-948853" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-948853
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-701296 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-701296

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-701296

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-701296

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-701296

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-701296

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-701296

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-701296

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-701296

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-701296

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-701296

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-701296

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-701296" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-701296" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-701296" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:48:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-052444
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:50:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-667171
contexts:
- context:
    cluster: kubernetes-upgrade-052444
    user: kubernetes-upgrade-052444
  name: kubernetes-upgrade-052444
- context:
    cluster: pause-667171
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:50:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-667171
  name: pause-667171
current-context: pause-667171
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-052444
  user:
    client-certificate: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/kubernetes-upgrade-052444/client.crt
    client-key: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/kubernetes-upgrade-052444/client.key
- name: pause-667171
  user:
    client-certificate: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/pause-667171/client.crt
    client-key: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/pause-667171/client.key
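Only the kubernetes-upgrade-052444 and pause-667171 entries survive in this kubeconfig, which is exactly why every kubenet-701296 probe above reports a missing context. The standard commands to inspect and switch entries in such a file:

kubectl config get-contexts              # shows only the two contexts listed above
kubectl config use-context pause-667171  # matches the current-context field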

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-701296

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

>>> host: cri-docker daemon config:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

>>> host: cri-dockerd version:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

>>> host: containerd daemon status:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

>>> host: containerd daemon config:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

>>> host: containerd config dump:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

>>> host: crio daemon status:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

>>> host: crio daemon config:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

>>> host: /etc/crio:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

>>> host: crio config:
* Profile "kubenet-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701296"

----------------------- debugLogs end: kubenet-701296 [took: 3.609972396s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-701296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-701296
--- SKIP: TestNetworkPlugins/group/kubenet (3.79s)

TestNetworkPlugins/group/cilium (3.96s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-701296 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-701296

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-701296

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-701296

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-701296

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-701296

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-701296

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-701296

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-701296

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-701296

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-701296

>>> host: /etc/nsswitch.conf:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: /etc/hosts:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: /etc/resolv.conf:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-701296

>>> host: crictl pods:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: crictl containers:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> k8s: describe netcat deployment:
error: context "cilium-701296" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-701296" does not exist

>>> k8s: netcat logs:
error: context "cilium-701296" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-701296" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-701296" does not exist

>>> k8s: coredns logs:
error: context "cilium-701296" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-701296" does not exist

>>> k8s: api server logs:
error: context "cilium-701296" does not exist

>>> host: /etc/cni:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: ip a s:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: ip r s:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: iptables-save:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: iptables table nat:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-701296

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-701296

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-701296" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-701296" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-701296

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-701296

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-701296" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-701296" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-701296" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-701296" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-701296" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: kubelet daemon config:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> k8s: kubelet logs:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:50:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-775004
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:48:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-052444
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17644-72381/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:50:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-667171
contexts:
- context:
    cluster: NoKubernetes-775004
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:50:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: NoKubernetes-775004
  name: NoKubernetes-775004
- context:
    cluster: kubernetes-upgrade-052444
    user: kubernetes-upgrade-052444
  name: kubernetes-upgrade-052444
- context:
    cluster: pause-667171
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 11:50:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-667171
  name: pause-667171
current-context: NoKubernetes-775004
kind: Config
preferences: {}
users:
- name: NoKubernetes-775004
  user:
    client-certificate: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/NoKubernetes-775004/client.crt
    client-key: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/NoKubernetes-775004/client.key
- name: kubernetes-upgrade-052444
  user:
    client-certificate: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/kubernetes-upgrade-052444/client.crt
    client-key: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/kubernetes-upgrade-052444/client.key
- name: pause-667171
  user:
    client-certificate: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/pause-667171/client.crt
    client-key: /home/jenkins/minikube-integration/17644-72381/.minikube/profiles/pause-667171/client.key
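
Note: the kubeconfig above defines contexts only for NoKubernetes-775004, kubernetes-upgrade-052444 and pause-667171; the cilium-701296 profile was never started, which is why every kubectl command in this debug log fails with "context was not found" / "does not exist". A minimal sketch of how to confirm that against the same kubeconfig (standard kubectl subcommands, nothing specific to this report):

  kubectl config get-contexts               # lists only the three contexts above
  kubectl config current-context            # prints NoKubernetes-775004
  kubectl --context cilium-701296 get pods  # reproduces: context "cilium-701296" does not exist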

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-701296

>>> host: docker daemon status:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: docker daemon config:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: docker system info:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: cri-docker daemon status:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: cri-docker daemon config:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: cri-dockerd version:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: containerd daemon status:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: containerd daemon config:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: containerd config dump:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: crio daemon status:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: crio daemon config:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: /etc/crio:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

>>> host: crio config:
* Profile "cilium-701296" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701296"

----------------------- debugLogs end: cilium-701296 [took: 3.801301215s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-701296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-701296
--- SKIP: TestNetworkPlugins/group/cilium (3.96s)